Nov 23 23:10:01.828365 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 23 23:10:01.828388 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025
Nov 23 23:10:01.828398 kernel: KASLR enabled
Nov 23 23:10:01.828404 kernel: efi: EFI v2.7 by EDK II
Nov 23 23:10:01.828410 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Nov 23 23:10:01.828415 kernel: random: crng init done
Nov 23 23:10:01.828422 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Nov 23 23:10:01.828428 kernel: secureboot: Secure boot enabled
Nov 23 23:10:01.828434 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:10:01.828441 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Nov 23 23:10:01.828447 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 23 23:10:01.828454 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828459 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828466 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828473 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828481 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828487 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828493 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828499 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828505 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:10:01.828511 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 23 23:10:01.828518 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:10:01.828524 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:10:01.828530 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Nov 23 23:10:01.828536 kernel: Zone ranges:
Nov 23 23:10:01.828543 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:10:01.828549 kernel:   DMA32    empty
Nov 23 23:10:01.828555 kernel:   Normal   empty
Nov 23 23:10:01.828561 kernel:   Device   empty
Nov 23 23:10:01.828567 kernel: Movable zone start for each node
Nov 23 23:10:01.828573 kernel: Early memory node ranges
Nov 23 23:10:01.828579 kernel:   node   0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Nov 23 23:10:01.828585 kernel:   node   0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Nov 23 23:10:01.828591 kernel:   node   0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Nov 23 23:10:01.828597 kernel:   node   0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Nov 23 23:10:01.828603 kernel:   node   0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Nov 23 23:10:01.828610 kernel:   node   0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Nov 23 23:10:01.828617 kernel:   node   0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Nov 23 23:10:01.828624 kernel:   node   0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Nov 23 23:10:01.828630 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 23 23:10:01.828639 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:10:01.828645 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 23 23:10:01.828652 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Nov 23 23:10:01.828658 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:10:01.828666 kernel: psci: PSCIv1.1 detected in firmware.
Nov 23 23:10:01.828673 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:10:01.828679 kernel: psci: Trusted OS migration not required
Nov 23 23:10:01.828685 kernel: psci: SMC Calling Convention v1.1
Nov 23 23:10:01.828692 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 23 23:10:01.828698 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:10:01.828705 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:10:01.828712 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 23 23:10:01.828718 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:10:01.828726 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:10:01.828733 kernel: CPU features: detected: Spectre-v4
Nov 23 23:10:01.828739 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:10:01.828746 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:10:01.828752 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:10:01.828759 kernel: CPU features: detected: ARM erratum 1418040
Nov 23 23:10:01.828766 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:10:01.828773 kernel: alternatives: applying boot alternatives
Nov 23 23:10:01.828780 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:10:01.828787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:10:01.828794 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:10:01.828803 kernel: Fallback order for Node 0: 0
Nov 23 23:10:01.828809 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 643072
Nov 23 23:10:01.828816 kernel: Policy zone: DMA
Nov 23 23:10:01.828823 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:10:01.828829 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 23 23:10:01.828836 kernel: software IO TLB: area num 4.
Nov 23 23:10:01.828843 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 23 23:10:01.828849 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Nov 23 23:10:01.828856 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 23 23:10:01.828862 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:10:01.828870 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:10:01.828884 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 23 23:10:01.828893 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:10:01.828911 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:10:01.828918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:10:01.828925 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 23 23:10:01.828932 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:10:01.828939 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:10:01.828945 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:10:01.828952 kernel: GICv3: 256 SPIs implemented
Nov 23 23:10:01.828958 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:10:01.828965 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:10:01.828971 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 23 23:10:01.828978 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 23:10:01.828986 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 23 23:10:01.828993 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 23 23:10:01.829000 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 23:10:01.829008 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 23 23:10:01.829015 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 23 23:10:01.829021 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 23 23:10:01.829028 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:10:01.829034 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:10:01.829041 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 23 23:10:01.829047 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 23 23:10:01.829054 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 23 23:10:01.829074 kernel: arm-pv: using stolen time PV
Nov 23 23:10:01.829081 kernel: Console: colour dummy device 80x25
Nov 23 23:10:01.829088 kernel: ACPI: Core revision 20240827
Nov 23 23:10:01.829095 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 23 23:10:01.829103 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:10:01.829110 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:10:01.829117 kernel: landlock: Up and running.
Nov 23 23:10:01.829123 kernel: SELinux: Initializing.
Nov 23 23:10:01.829130 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:10:01.829138 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:10:01.829145 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:10:01.829152 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:10:01.829158 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:10:01.829165 kernel: Remapping and enabling EFI services.
Nov 23 23:10:01.829172 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:10:01.829178 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:10:01.829185 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 23 23:10:01.829192 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 23 23:10:01.829200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:10:01.829212 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 23 23:10:01.829224 kernel: Detected PIPT I-cache on CPU2
Nov 23 23:10:01.829232 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 23 23:10:01.829239 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 23 23:10:01.829246 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:10:01.829253 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 23 23:10:01.829260 kernel: Detected PIPT I-cache on CPU3
Nov 23 23:10:01.829270 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 23 23:10:01.829277 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 23 23:10:01.829284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:10:01.829291 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 23 23:10:01.829298 kernel: smp: Brought up 1 node, 4 CPUs
Nov 23 23:10:01.829305 kernel: SMP: Total of 4 processors activated.
Nov 23 23:10:01.829312 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:10:01.829319 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:10:01.829327 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:10:01.829334 kernel: CPU features: detected: Common not Private translations
Nov 23 23:10:01.829343 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:10:01.829350 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 23 23:10:01.829357 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:10:01.829364 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:10:01.829371 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:10:01.829378 kernel: CPU features: detected: RAS Extension Support
Nov 23 23:10:01.829385 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:10:01.829392 kernel: alternatives: applying system-wide alternatives
Nov 23 23:10:01.829399 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 23 23:10:01.829408 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Nov 23 23:10:01.829415 kernel: devtmpfs: initialized
Nov 23 23:10:01.829422 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:10:01.829429 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 23 23:10:01.829436 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:10:01.829443 kernel: 0 pages in range for non-PLT usage
Nov 23 23:10:01.829449 kernel: 508400 pages in range for PLT usage
Nov 23 23:10:01.829456 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:10:01.829463 kernel: SMBIOS 3.0.0 present.
Nov 23 23:10:01.829472 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 23 23:10:01.829479 kernel: DMI: Memory slots populated: 1/1
Nov 23 23:10:01.829486 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:10:01.829493 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:10:01.829501 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:10:01.829508 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:10:01.829515 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:10:01.829522 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1
Nov 23 23:10:01.829529 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:10:01.829537 kernel: cpuidle: using governor menu
Nov 23 23:10:01.829544 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:10:01.829551 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:10:01.829558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:10:01.829565 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:10:01.829572 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:10:01.829579 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:10:01.829586 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:10:01.829593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:10:01.829601 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:10:01.829609 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:10:01.829616 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:10:01.829623 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:10:01.829630 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:10:01.829636 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:10:01.829643 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:10:01.829651 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:10:01.829657 kernel: ACPI: Interpreter enabled
Nov 23 23:10:01.829666 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:10:01.829673 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 23:10:01.829680 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:10:01.829687 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:10:01.829694 kernel: ACPI: CPU2 has been hot-added
Nov 23 23:10:01.829700 kernel: ACPI: CPU3 has been hot-added
Nov 23 23:10:01.829708 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:10:01.829715 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:10:01.829722 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 23:10:01.829873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 23:10:01.829992 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 23:10:01.830056 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 23:10:01.830114 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 23 23:10:01.830172 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 23 23:10:01.830181 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 23 23:10:01.830189 kernel: PCI host bridge to bus 0000:00
Nov 23 23:10:01.830263 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 23 23:10:01.830324 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 23:10:01.830378 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 23 23:10:01.830433 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 23:10:01.830516 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 23 23:10:01.830588 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 23:10:01.830651 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 23 23:10:01.830712 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 23 23:10:01.830773 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 23 23:10:01.830861 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 23 23:10:01.831013 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 23 23:10:01.831101 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 23 23:10:01.831183 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 23 23:10:01.831240 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 23 23:10:01.831297 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 23 23:10:01.831306 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 23 23:10:01.831314 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 23 23:10:01.831321 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 23 23:10:01.831328 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 23 23:10:01.831335 kernel: iommu: Default domain type: Translated
Nov 23 23:10:01.831342 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 23:10:01.831349 kernel: efivars: Registered efivars operations
Nov 23 23:10:01.831358 kernel: vgaarb: loaded
Nov 23 23:10:01.831365 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 23:10:01.831372 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 23:10:01.831379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 23:10:01.831386 kernel: pnp: PnP ACPI init
Nov 23 23:10:01.831460 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 23 23:10:01.831470 kernel: pnp: PnP ACPI: found 1 devices
Nov 23 23:10:01.831477 kernel: NET: Registered PF_INET protocol family
Nov 23 23:10:01.831486 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 23:10:01.831493 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 23:10:01.831501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 23:10:01.831508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 23:10:01.831515 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 23:10:01.831522 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 23:10:01.831529 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:10:01.831536 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:10:01.831543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 23:10:01.831551 kernel: PCI: CLS 0 bytes, default 64
Nov 23 23:10:01.831558 kernel: kvm [1]: HYP mode not available
Nov 23 23:10:01.831565 kernel: Initialise system trusted keyrings
Nov 23 23:10:01.831572 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 23:10:01.831579 kernel: Key type asymmetric registered
Nov 23 23:10:01.831586 kernel: Asymmetric key parser 'x509' registered
Nov 23 23:10:01.831593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 23:10:01.831601 kernel: io scheduler mq-deadline registered
Nov 23 23:10:01.831608 kernel: io scheduler kyber registered
Nov 23 23:10:01.831616 kernel: io scheduler bfq registered
Nov 23 23:10:01.831623 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 23 23:10:01.831630 kernel: ACPI: button: Power Button [PWRB]
Nov 23 23:10:01.831638 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 23 23:10:01.831698 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 23 23:10:01.831707 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 23:10:01.831715 kernel: thunder_xcv, ver 1.0
Nov 23 23:10:01.831722 kernel: thunder_bgx, ver 1.0
Nov 23 23:10:01.831729 kernel: nicpf, ver 1.0
Nov 23 23:10:01.831738 kernel: nicvf, ver 1.0
Nov 23 23:10:01.831807 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 23:10:01.831864 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:10:01 UTC (1763939401)
Nov 23 23:10:01.831873 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 23:10:01.831887 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 23:10:01.831895 kernel: watchdog: NMI not fully supported
Nov 23 23:10:01.831910 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 23:10:01.831918 kernel: NET: Registered PF_INET6 protocol family
Nov 23 23:10:01.831928 kernel: Segment Routing with IPv6
Nov 23 23:10:01.831935 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 23:10:01.831942 kernel: NET: Registered PF_PACKET protocol family
Nov 23 23:10:01.831949 kernel: Key type dns_resolver registered
Nov 23 23:10:01.831955 kernel: registered taskstats version 1
Nov 23 23:10:01.831962 kernel: Loading compiled-in X.509 certificates
Nov 23 23:10:01.831970 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339'
Nov 23 23:10:01.831977 kernel: Demotion targets for Node 0: null
Nov 23 23:10:01.831984 kernel: Key type .fscrypt registered
Nov 23 23:10:01.831992 kernel: Key type fscrypt-provisioning registered
Nov 23 23:10:01.831999 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 23:10:01.832007 kernel: ima: Allocated hash algorithm: sha1
Nov 23 23:10:01.832014 kernel: ima: No architecture policies found
Nov 23 23:10:01.832021 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 23:10:01.832028 kernel: clk: Disabling unused clocks
Nov 23 23:10:01.832035 kernel: PM: genpd: Disabling unused power domains
Nov 23 23:10:01.832042 kernel: Warning: unable to open an initial console.
Nov 23 23:10:01.832049 kernel: Freeing unused kernel memory: 39552K
Nov 23 23:10:01.832058 kernel: Run /init as init process
Nov 23 23:10:01.832065 kernel:   with arguments:
Nov 23 23:10:01.832072 kernel:     /init
Nov 23 23:10:01.832079 kernel:   with environment:
Nov 23 23:10:01.832085 kernel:     HOME=/
Nov 23 23:10:01.832092 kernel:     TERM=linux
Nov 23 23:10:01.832100 systemd[1]: Successfully made /usr/ read-only.
Nov 23 23:10:01.832111 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:10:01.832120 systemd[1]: Detected virtualization kvm.
Nov 23 23:10:01.832128 systemd[1]: Detected architecture arm64.
Nov 23 23:10:01.832135 systemd[1]: Running in initrd.
Nov 23 23:10:01.832142 systemd[1]: No hostname configured, using default hostname.
Nov 23 23:10:01.832150 systemd[1]: Hostname set to .
Nov 23 23:10:01.832157 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:10:01.832164 systemd[1]: Queued start job for default target initrd.target.
Nov 23 23:10:01.832172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:10:01.832181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:10:01.832189 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 23:10:01.832197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:10:01.832204 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 23:10:01.832213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 23:10:01.832221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 23:10:01.832230 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 23:10:01.832238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:10:01.832245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:10:01.832253 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:10:01.832260 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:10:01.832268 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:10:01.832275 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:10:01.832283 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:10:01.832291 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:10:01.832299 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 23:10:01.832307 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 23:10:01.832315 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:10:01.832322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:10:01.832330 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:10:01.832337 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:10:01.832345 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 23:10:01.832353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:10:01.832362 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 23:10:01.832370 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 23:10:01.832378 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 23:10:01.832385 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:10:01.832393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:10:01.832400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:10:01.832408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 23:10:01.832418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:10:01.832426 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 23:10:01.832434 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:10:01.832462 systemd-journald[245]: Collecting audit messages is disabled.
Nov 23 23:10:01.832485 systemd-journald[245]: Journal started
Nov 23 23:10:01.832502 systemd-journald[245]: Runtime Journal (/run/log/journal/98bdf99479734d7daabce7a1234b260f) is 6M, max 48.5M, 42.4M free.
Nov 23 23:10:01.838013 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 23:10:01.820274 systemd-modules-load[246]: Inserted module 'overlay'
Nov 23 23:10:01.841057 kernel: Bridge firewalling registered
Nov 23 23:10:01.841084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:10:01.839744 systemd-modules-load[246]: Inserted module 'br_netfilter'
Nov 23 23:10:01.845007 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:10:01.846484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:10:01.848046 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:10:01.852887 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 23:10:01.854982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:10:01.857206 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:10:01.871951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:10:01.881281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:10:01.883053 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:10:01.886560 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 23:10:01.890769 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:10:01.894429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:10:01.896731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:10:01.899185 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 23:10:01.919056 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:10:01.934611 systemd-resolved[289]: Positive Trust Anchors:
Nov 23 23:10:01.934630 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:10:01.934661 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:10:01.941253 systemd-resolved[289]: Defaulting to hostname 'linux'.
Nov 23 23:10:01.942303 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:10:01.946200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:10:02.004934 kernel: SCSI subsystem initialized
Nov 23 23:10:02.009924 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 23:10:02.017942 kernel: iscsi: registered transport (tcp)
Nov 23 23:10:02.031346 kernel: iscsi: registered transport (qla4xxx)
Nov 23 23:10:02.031392 kernel: QLogic iSCSI HBA Driver
Nov 23 23:10:02.050738 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:10:02.068169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:10:02.070482 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:10:02.124562 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:10:02.127428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 23:10:02.201952 kernel: raid6: neonx8 gen() 14891 MB/s
Nov 23 23:10:02.218938 kernel: raid6: neonx4 gen() 15703 MB/s
Nov 23 23:10:02.235941 kernel: raid6: neonx2 gen() 13072 MB/s
Nov 23 23:10:02.252950 kernel: raid6: neonx1 gen() 10227 MB/s
Nov 23 23:10:02.269943 kernel: raid6: int64x8 gen() 5085 MB/s
Nov 23 23:10:02.286938 kernel: raid6: int64x4 gen() 7303 MB/s
Nov 23 23:10:02.303938 kernel: raid6: int64x2 gen() 5922 MB/s
Nov 23 23:10:02.321267 kernel: raid6: int64x1 gen() 4993 MB/s
Nov 23 23:10:02.321322 kernel: raid6: using algorithm neonx4 gen() 15703 MB/s
Nov 23 23:10:02.339217 kernel: raid6: .... xor() 11897 MB/s, rmw enabled
Nov 23 23:10:02.339266 kernel: raid6: using neon recovery algorithm
Nov 23 23:10:02.345235 kernel: xor: measuring software checksum speed
Nov 23 23:10:02.345265 kernel: 8regs : 21636 MB/sec
Nov 23 23:10:02.345939 kernel: 32regs : 21584 MB/sec
Nov 23 23:10:02.347302 kernel: arm64_neon : 27804 MB/sec
Nov 23 23:10:02.347341 kernel: xor: using function: arm64_neon (27804 MB/sec)
Nov 23 23:10:02.401006 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 23:10:02.407738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:10:02.411280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:10:02.450774 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Nov 23 23:10:02.456074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:10:02.458821 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 23:10:02.494317 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Nov 23 23:10:02.519508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:10:02.523085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:10:02.584700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:10:02.588005 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 23:10:02.634943 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 23 23:10:02.635354 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 23 23:10:02.643079 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 23:10:02.643139 kernel: GPT:9289727 != 19775487
Nov 23 23:10:02.643155 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 23:10:02.645688 kernel: GPT:9289727 != 19775487
Nov 23 23:10:02.645741 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 23:10:02.645753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:10:02.648234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:10:02.648351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:10:02.654339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:10:02.658167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:10:02.680969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 23 23:10:02.687649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 23 23:10:02.691882 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 23 23:10:02.694535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:10:02.696022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:10:02.714645 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 23 23:10:02.722258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 23:10:02.723642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:10:02.726054 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:10:02.728289 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:10:02.731339 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 23 23:10:02.733543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 23 23:10:02.759616 disk-uuid[592]: Primary Header is updated.
Nov 23 23:10:02.759616 disk-uuid[592]: Secondary Entries is updated.
Nov 23 23:10:02.759616 disk-uuid[592]: Secondary Header is updated.
Nov 23 23:10:02.764430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:10:02.767768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:10:03.773962 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:10:03.774187 disk-uuid[597]: The operation has completed successfully.
Nov 23 23:10:03.799206 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 23 23:10:03.799967 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 23 23:10:03.824244 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 23 23:10:03.845894 sh[612]: Success
Nov 23 23:10:03.858545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 23:10:03.858595 kernel: device-mapper: uevent: version 1.0.3
Nov 23 23:10:03.859775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 23 23:10:03.868923 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 23 23:10:03.893765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 23 23:10:03.896635 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 23 23:10:03.913927 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 23 23:10:03.920638 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624)
Nov 23 23:10:03.920661 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498
Nov 23 23:10:03.920679 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:10:03.924920 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 23 23:10:03.924941 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 23 23:10:03.925954 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 23 23:10:03.927288 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:10:03.929145 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 23 23:10:03.929957 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 23 23:10:03.931594 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 23 23:10:03.957098 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Nov 23 23:10:03.957152 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:10:03.957163 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:10:03.961423 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:10:03.961494 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:10:03.965949 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:10:03.967482 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 23:10:03.969589 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 23:10:04.045123 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:10:04.048436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:10:04.070985 ignition[702]: Ignition 2.22.0
Nov 23 23:10:04.070999 ignition[702]: Stage: fetch-offline
Nov 23 23:10:04.071040 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:04.071048 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:04.071130 ignition[702]: parsed url from cmdline: ""
Nov 23 23:10:04.071133 ignition[702]: no config URL provided
Nov 23 23:10:04.071137 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:10:04.071144 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:10:04.071166 ignition[702]: op(1): [started] loading QEMU firmware config module
Nov 23 23:10:04.071171 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 23 23:10:04.083548 ignition[702]: op(1): [finished] loading QEMU firmware config module
Nov 23 23:10:04.088303 systemd-networkd[808]: lo: Link UP
Nov 23 23:10:04.088315 systemd-networkd[808]: lo: Gained carrier
Nov 23 23:10:04.089019 systemd-networkd[808]: Enumeration completed
Nov 23 23:10:04.089357 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:10:04.089435 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:10:04.089439 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:10:04.090470 systemd-networkd[808]: eth0: Link UP
Nov 23 23:10:04.090557 systemd-networkd[808]: eth0: Gained carrier
Nov 23 23:10:04.090566 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:10:04.091194 systemd[1]: Reached target network.target - Network.
Nov 23 23:10:04.109962 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 23:10:04.138870 ignition[702]: parsing config with SHA512: 3177b9e83ea88960a16b4429a199a26a139c2c3d831e13e788e272153f998cf84f7a8e99daed557f80f27bfbcc5c29e102addf0288026cca4159acf75969f71c
Nov 23 23:10:04.144428 unknown[702]: fetched base config from "system"
Nov 23 23:10:04.144443 unknown[702]: fetched user config from "qemu"
Nov 23 23:10:04.145086 ignition[702]: fetch-offline: fetch-offline passed
Nov 23 23:10:04.145017 systemd-resolved[289]: Detected conflict on linux IN A 10.0.0.81
Nov 23 23:10:04.145147 ignition[702]: Ignition finished successfully
Nov 23 23:10:04.145024 systemd-resolved[289]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Nov 23 23:10:04.147959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:10:04.149737 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 23 23:10:04.150608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 23:10:04.201246 ignition[816]: Ignition 2.22.0
Nov 23 23:10:04.201264 ignition[816]: Stage: kargs
Nov 23 23:10:04.201426 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:04.201436 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:04.204485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 23:10:04.202242 ignition[816]: kargs: kargs passed
Nov 23 23:10:04.202292 ignition[816]: Ignition finished successfully
Nov 23 23:10:04.207090 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 23:10:04.236698 ignition[824]: Ignition 2.22.0
Nov 23 23:10:04.236718 ignition[824]: Stage: disks
Nov 23 23:10:04.236883 ignition[824]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:04.236893 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:04.239837 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 23:10:04.237682 ignition[824]: disks: disks passed
Nov 23 23:10:04.241254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 23:10:04.237728 ignition[824]: Ignition finished successfully
Nov 23 23:10:04.243271 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 23:10:04.245420 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:10:04.247587 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:10:04.249737 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:10:04.252702 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 23:10:04.295456 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 23 23:10:04.300733 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 23:10:04.305058 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 23:10:04.383943 kernel: EXT4-fs (vda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none.
Nov 23 23:10:04.384091 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 23:10:04.385391 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:10:04.388095 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:10:04.389940 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 23:10:04.391088 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 23 23:10:04.391133 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 23:10:04.391158 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:10:04.400674 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 23:10:04.405918 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (842)
Nov 23 23:10:04.405026 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 23:10:04.410614 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:10:04.410635 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:10:04.412913 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:10:04.412932 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:10:04.414591 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:10:04.450316 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 23:10:04.455096 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Nov 23 23:10:04.458947 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 23:10:04.461832 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 23:10:04.535970 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 23:10:04.538086 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 23:10:04.539655 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 23:10:04.557918 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:10:04.575065 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 23:10:04.590951 ignition[956]: INFO : Ignition 2.22.0
Nov 23 23:10:04.590951 ignition[956]: INFO : Stage: mount
Nov 23 23:10:04.592667 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:04.592667 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:04.592667 ignition[956]: INFO : mount: mount passed
Nov 23 23:10:04.592667 ignition[956]: INFO : Ignition finished successfully
Nov 23 23:10:04.593408 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 23:10:04.595815 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 23:10:04.920068 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 23:10:04.921535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:10:04.944873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969)
Nov 23 23:10:04.944950 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:10:04.944961 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:10:04.949274 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:10:04.949325 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:10:04.952318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:10:04.993676 ignition[986]: INFO : Ignition 2.22.0
Nov 23 23:10:04.993676 ignition[986]: INFO : Stage: files
Nov 23 23:10:04.995504 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:04.995504 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:04.995504 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 23:10:04.999101 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 23:10:04.999101 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 23:10:04.999101 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 23:10:04.999101 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 23:10:04.999101 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 23:10:04.999101 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 23:10:04.999101 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 23 23:10:04.997818 unknown[986]: wrote ssh authorized keys file for user: core
Nov 23 23:10:05.182696 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 23:10:05.424005 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:10:05.427328 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:10:05.448152 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 23 23:10:05.551061 systemd-networkd[808]: eth0: Gained IPv6LL
Nov 23 23:10:05.722345 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 23:10:06.105560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:10:06.105560 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 23:10:06.111949 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:10:06.115978 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:10:06.115978 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 23:10:06.115978 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 23 23:10:06.121223 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:10:06.121223 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:10:06.121223 ignition[986]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 23 23:10:06.121223 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:10:06.145833 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:10:06.153790 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:10:06.156842 ignition[986]: INFO : files: files passed
Nov 23 23:10:06.156842 ignition[986]: INFO : Ignition finished successfully
Nov 23 23:10:06.157644 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 23:10:06.164558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 23:10:06.168404 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 23:10:06.182373 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 23:10:06.183448 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 23:10:06.185880 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 23 23:10:06.187311 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:10:06.187311 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:10:06.190725 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:10:06.190943 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:10:06.193815 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 23:10:06.196813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 23:10:06.244025 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 23:10:06.244168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 23:10:06.246709 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 23:10:06.248819 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 23:10:06.251154 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 23:10:06.252243 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 23:10:06.280718 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:10:06.283391 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 23 23:10:06.306048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:10:06.307478 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:10:06.310015 systemd[1]: Stopped target timers.target - Timer Units.
Nov 23 23:10:06.312042 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 23:10:06.312243 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:10:06.315224 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 23 23:10:06.317397 systemd[1]: Stopped target basic.target - Basic System.
Nov 23 23:10:06.319242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 23 23:10:06.321212 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:10:06.323482 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 23 23:10:06.325736 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:10:06.328284 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 23 23:10:06.330451 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:10:06.332741 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 23 23:10:06.335008 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 23 23:10:06.337142 systemd[1]: Stopped target swap.target - Swaps.
Nov 23 23:10:06.338967 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 23:10:06.339130 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:10:06.341780 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:10:06.344237 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:10:06.346709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 23 23:10:06.349970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:10:06.351331 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 23:10:06.351457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:10:06.354522 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 23:10:06.354666 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:10:06.356972 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 23:10:06.358697 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 23:10:06.358837 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:10:06.361055 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 23:10:06.362785 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 23 23:10:06.364777 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 23 23:10:06.364929 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:10:06.367084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 23 23:10:06.367215 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:10:06.368972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 23 23:10:06.369108 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:10:06.371071 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 23 23:10:06.371180 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 23 23:10:06.373909 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 23 23:10:06.376660 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 23 23:10:06.378624 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 23:10:06.378750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:10:06.381340 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 23:10:06.381472 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:10:06.387244 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 23:10:06.387358 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 23:10:06.400143 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 23:10:06.407159 ignition[1041]: INFO : Ignition 2.22.0
Nov 23 23:10:06.407159 ignition[1041]: INFO : Stage: umount
Nov 23 23:10:06.410991 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:10:06.410991 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:10:06.410991 ignition[1041]: INFO : umount: umount passed
Nov 23 23:10:06.410991 ignition[1041]: INFO : Ignition finished successfully
Nov 23 23:10:06.408060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 23 23:10:06.408975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 23 23:10:06.414082 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 23:10:06.414974 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 23:10:06.417077 systemd[1]: Stopped target network.target - Network.
Nov 23 23:10:06.418593 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 23:10:06.418657 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 23:10:06.420652 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 23:10:06.420703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 23:10:06.422546 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 23 23:10:06.422597 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 23 23:10:06.424233 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 23 23:10:06.424276 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 23 23:10:06.426037 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 23 23:10:06.426089 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 23 23:10:06.428131 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 23 23:10:06.429877 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 23 23:10:06.434873 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 23:10:06.435012 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 23:10:06.439434 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 23:10:06.439697 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 23:10:06.439735 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:10:06.444672 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:10:06.444969 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 23:10:06.445078 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 23:10:06.451881 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 23:10:06.452346 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 23:10:06.454629 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 23:10:06.454667 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:10:06.458621 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 23:10:06.459808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 23:10:06.459875 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:10:06.462076 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 23:10:06.462119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:10:06.465099 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 23:10:06.465143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:10:06.467211 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:10:06.471507 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 23:10:06.480622 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 23:10:06.481052 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:10:06.482516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 23:10:06.482551 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:10:06.484161 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 23:10:06.484191 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:10:06.486331 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 23:10:06.486383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:10:06.489306 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 23:10:06.489355 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:10:06.492052 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 23:10:06.492102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:10:06.495029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 23:10:06.496282 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 23:10:06.496340 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:10:06.499288 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 23:10:06.499333 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:10:06.502631 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 23 23:10:06.502672 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:10:06.505937 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 23:10:06.505979 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:10:06.508545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:10:06.508594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:10:06.512304 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 23:10:06.513041 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 23:10:06.518307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 23:10:06.518408 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 23:10:06.519964 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 23:10:06.522561 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 23:10:06.539724 systemd[1]: Switching root.
Nov 23 23:10:06.575603 systemd-journald[245]: Journal stopped
Nov 23 23:10:07.334297 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Nov 23 23:10:07.334355 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 23:10:07.334367 kernel: SELinux: policy capability open_perms=1
Nov 23 23:10:07.334380 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 23:10:07.334390 kernel: SELinux: policy capability always_check_network=0
Nov 23 23:10:07.334398 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 23:10:07.334408 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 23:10:07.334421 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 23:10:07.334432 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 23:10:07.334442 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 23:10:07.334452 kernel: audit: type=1403 audit(1763939406.740:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 23:10:07.334462 systemd[1]: Successfully loaded SELinux policy in 53.386ms.
Nov 23 23:10:07.334481 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.359ms.
Nov 23 23:10:07.334492 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:10:07.334503 systemd[1]: Detected virtualization kvm.
Nov 23 23:10:07.334513 systemd[1]: Detected architecture arm64.
Nov 23 23:10:07.334529 systemd[1]: Detected first boot.
Nov 23 23:10:07.334539 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:10:07.334549 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 23:10:07.334559 zram_generator::config[1089]: No configuration found.
Nov 23 23:10:07.334570 systemd[1]: Populated /etc with preset unit settings.
Nov 23 23:10:07.334581 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 23:10:07.334591 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 23:10:07.334601 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 23:10:07.334611 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:10:07.334622 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 23:10:07.334634 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 23:10:07.334645 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 23:10:07.334657 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 23:10:07.334668 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 23:10:07.334679 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 23:10:07.334690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 23:10:07.334700 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 23:10:07.334714 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:10:07.334725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:10:07.334737 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 23:10:07.334747 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 23:10:07.334757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 23:10:07.334768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:10:07.334779 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 23:10:07.334790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:10:07.334800 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:10:07.334811 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 23:10:07.334823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 23:10:07.334833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:10:07.334843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 23:10:07.334863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:10:07.334876 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:10:07.334886 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:10:07.334975 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:10:07.334989 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 23:10:07.335000 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 23:10:07.335014 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 23:10:07.335024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:10:07.335034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:10:07.335044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:10:07.335097 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 23:10:07.335109 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 23:10:07.335119 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 23:10:07.335129 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 23:10:07.335138 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 23:10:07.335151 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 23:10:07.335161 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 23:10:07.335171 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 23:10:07.335182 systemd[1]: Reached target machines.target - Containers.
Nov 23 23:10:07.335192 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 23:10:07.335202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:10:07.335212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:10:07.335223 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 23:10:07.335235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:10:07.335245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:10:07.335256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:10:07.335267 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 23:10:07.335277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:10:07.335288 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 23:10:07.335299 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 23:10:07.335311 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 23:10:07.335322 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 23:10:07.335334 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 23:10:07.335346 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:10:07.335356 kernel: fuse: init (API version 7.41)
Nov 23 23:10:07.335367 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:10:07.335377 kernel: loop: module loaded
Nov 23 23:10:07.335387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:10:07.335397 kernel: ACPI: bus type drm_connector registered
Nov 23 23:10:07.335407 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:10:07.335418 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 23:10:07.335430 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 23:10:07.335440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:10:07.335475 systemd-journald[1160]: Collecting audit messages is disabled.
Nov 23 23:10:07.335497 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 23:10:07.335508 systemd[1]: Stopped verity-setup.service.
Nov 23 23:10:07.335519 systemd-journald[1160]: Journal started
Nov 23 23:10:07.335539 systemd-journald[1160]: Runtime Journal (/run/log/journal/98bdf99479734d7daabce7a1234b260f) is 6M, max 48.5M, 42.4M free.
Nov 23 23:10:07.106504 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 23:10:07.131024 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 23 23:10:07.131413 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 23:10:07.340204 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:10:07.340951 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 23:10:07.342308 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 23:10:07.343630 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 23:10:07.344840 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 23:10:07.346178 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 23:10:07.347506 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 23:10:07.349934 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 23:10:07.351570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:10:07.353237 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 23:10:07.354028 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 23:10:07.355559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:10:07.355740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:10:07.357312 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:10:07.357481 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:10:07.359164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:10:07.359348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:10:07.361080 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 23:10:07.361248 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 23:10:07.363177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:10:07.363362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:10:07.364867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:10:07.366431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:10:07.368252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 23:10:07.370116 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 23:10:07.383008 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:10:07.386873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 23:10:07.389957 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 23:10:07.391179 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 23:10:07.391225 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:10:07.393226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 23:10:07.395796 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 23:10:07.397186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:10:07.398983 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 23:10:07.401445 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 23:10:07.402974 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:10:07.405091 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 23:10:07.406770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:10:07.410263 systemd-journald[1160]: Time spent on flushing to /var/log/journal/98bdf99479734d7daabce7a1234b260f is 11.843ms for 881 entries.
Nov 23 23:10:07.410263 systemd-journald[1160]: System Journal (/var/log/journal/98bdf99479734d7daabce7a1234b260f) is 8M, max 195.6M, 187.6M free.
Nov 23 23:10:07.433052 systemd-journald[1160]: Received client request to flush runtime journal.
Nov 23 23:10:07.408628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:10:07.413178 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 23:10:07.425573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:10:07.431964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:10:07.434379 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 23:10:07.438718 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 23:10:07.442917 kernel: loop0: detected capacity change from 0 to 119840
Nov 23 23:10:07.442916 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 23:10:07.445172 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 23:10:07.450593 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 23:10:07.451516 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 23 23:10:07.451531 systemd-tmpfiles[1205]: ACLs are not supported, ignoring.
Nov 23 23:10:07.456079 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 23:10:07.457815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:10:07.459734 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:10:07.464031 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 23:10:07.464200 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 23:10:07.477932 kernel: loop1: detected capacity change from 0 to 100632
Nov 23 23:10:07.498965 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 23:10:07.502103 kernel: loop2: detected capacity change from 0 to 211168
Nov 23 23:10:07.504953 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 23:10:07.509761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:10:07.534933 kernel: loop3: detected capacity change from 0 to 119840
Nov 23 23:10:07.538457 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Nov 23 23:10:07.538779 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Nov 23 23:10:07.543314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:10:07.545920 kernel: loop4: detected capacity change from 0 to 100632
Nov 23 23:10:07.561923 kernel: loop5: detected capacity change from 0 to 211168
Nov 23 23:10:07.567242 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 23 23:10:07.567652 (sd-merge)[1229]: Merged extensions into '/usr'.
Nov 23 23:10:07.572528 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 23:10:07.572546 systemd[1]: Reloading...
Nov 23 23:10:07.621334 zram_generator::config[1255]: No configuration found.
Nov 23 23:10:07.721409 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 23 23:10:07.775478 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 23:10:07.775640 systemd[1]: Reloading finished in 202 ms.
Nov 23 23:10:07.811589 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 23 23:10:07.813168 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 23:10:07.829148 systemd[1]: Starting ensure-sysext.service...
Nov 23 23:10:07.831167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:10:07.844008 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)...
Nov 23 23:10:07.844024 systemd[1]: Reloading...
Nov 23 23:10:07.845128 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 23:10:07.845167 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 23:10:07.845395 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 23:10:07.845576 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 23:10:07.846655 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 23:10:07.846992 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Nov 23 23:10:07.847120 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Nov 23 23:10:07.850489 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:10:07.850593 systemd-tmpfiles[1291]: Skipping /boot
Nov 23 23:10:07.856824 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:10:07.856940 systemd-tmpfiles[1291]: Skipping /boot
Nov 23 23:10:07.892941 zram_generator::config[1318]: No configuration found.
Nov 23 23:10:08.031283 systemd[1]: Reloading finished in 186 ms.
Nov 23 23:10:08.054601 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 23:10:08.061974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:10:08.074117 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:10:08.077114 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 23:10:08.088946 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 23:10:08.092884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:10:08.098107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:10:08.101213 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 23:10:08.109832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:10:08.111461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:10:08.121688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:10:08.124333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:10:08.126157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:10:08.126405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:10:08.130297 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 23:10:08.135577 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 23 23:10:08.139231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:10:08.139408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:10:08.142274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:10:08.142598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:10:08.144556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:10:08.144821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:10:08.155315 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Nov 23 23:10:08.156048 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 23 23:10:08.163836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:10:08.165296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:10:08.167662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:10:08.177239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:10:08.181154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:10:08.182434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:10:08.182558 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:10:08.183860 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 23 23:10:08.187270 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 23:10:08.192428 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 23 23:10:08.194455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:10:08.197473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:10:08.197657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:10:08.199310 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:10:08.199969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:10:08.202658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:10:08.203055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:10:08.203430 augenrules[1406]: No rules
Nov 23 23:10:08.205986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:10:08.206165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:10:08.208090 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:10:08.208282 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:10:08.211549 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 23 23:10:08.217891 systemd[1]: Finished ensure-sysext.service.
Nov 23 23:10:08.236239 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:10:08.237389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:10:08.237490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:10:08.239239 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 23 23:10:08.241978 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 23 23:10:08.250398 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 23:10:08.311389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 23:10:08.314703 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 23 23:10:08.337317 systemd-resolved[1357]: Positive Trust Anchors:
Nov 23 23:10:08.337616 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:10:08.337651 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:10:08.347710 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 23 23:10:08.350812 systemd-resolved[1357]: Defaulting to hostname 'linux'.
Nov 23 23:10:08.352222 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:10:08.354139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:10:08.357665 systemd-networkd[1438]: lo: Link UP
Nov 23 23:10:08.357974 systemd-networkd[1438]: lo: Gained carrier
Nov 23 23:10:08.358812 systemd-networkd[1438]: Enumeration completed
Nov 23 23:10:08.359209 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:10:08.359522 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:10:08.359600 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:10:08.360391 systemd-networkd[1438]: eth0: Link UP
Nov 23 23:10:08.360621 systemd-networkd[1438]: eth0: Gained carrier
Nov 23 23:10:08.360697 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:10:08.361278 systemd[1]: Reached target network.target - Network.
Nov 23 23:10:08.363706 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 23:10:08.366491 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 23:10:08.368426 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 23 23:10:08.369954 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:10:08.371227 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 23 23:10:08.373069 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 23 23:10:08.374014 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 23:10:08.374446 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 23 23:10:08.375042 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection.
Nov 23 23:10:08.375791 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 23 23:10:08.375831 systemd[1]: Reached target paths.target - Path Units.
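systemd-networkd twice warns that eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network "based on potentially unpredictable interface name". Where that warning matters, the usual remedy is a local .network file that matches on a stable link property such as the MAC address rather than the name. A minimal sketch (the filename and MAC address below are placeholders, not values from this host):

```ini
# /etc/systemd/network/10-dhcp.network -- illustrative sketch only;
# the MACAddress= value is a placeholder for the interface's real MAC.
[Match]
MACAddress=52:54:00:12:34:56

[Network]
DHCP=yes
```

Files in /etc/systemd/network/ sort ahead of the packaged zz-default.network, so a match here takes precedence.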
Nov 23 23:10:08.376371 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 23 23:10:08.376486 systemd-timesyncd[1439]: Initial clock synchronization to Sun 2025-11-23 23:10:08.408023 UTC.
Nov 23 23:10:08.377439 systemd[1]: Reached target time-set.target - System Time Set.
Nov 23 23:10:08.378748 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 23 23:10:08.380104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 23 23:10:08.381514 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:10:08.383316 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 23 23:10:08.385948 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 23 23:10:08.388508 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 23 23:10:08.392070 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 23 23:10:08.393457 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 23 23:10:08.407729 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 23 23:10:08.409635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 23 23:10:08.411998 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 23:10:08.413601 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 23 23:10:08.415484 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:10:08.417018 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:10:08.418173 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:10:08.418207 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:10:08.419487 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 23 23:10:08.421596 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 23 23:10:08.423937 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 23 23:10:08.428058 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 23 23:10:08.446168 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 23 23:10:08.447375 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 23 23:10:08.448539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 23 23:10:08.451911 jq[1476]: false
Nov 23 23:10:08.452085 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 23 23:10:08.454192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 23 23:10:08.456957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 23 23:10:08.462029 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 23 23:10:08.464818 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 23 23:10:08.465344 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 23 23:10:08.467369 systemd[1]: Starting update-engine.service - Update Engine...
Nov 23 23:10:08.471103 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 23 23:10:08.473958 extend-filesystems[1477]: Found /dev/vda6
Nov 23 23:10:08.475721 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 23 23:10:08.477973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 23 23:10:08.483637 jq[1492]: true
Nov 23 23:10:08.483808 extend-filesystems[1477]: Found /dev/vda9
Nov 23 23:10:08.483808 extend-filesystems[1477]: Checking size of /dev/vda9
Nov 23 23:10:08.479189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 23 23:10:08.481200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 23 23:10:08.481378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 23 23:10:08.499100 update_engine[1488]: I20251123 23:10:08.495442 1488 main.cc:92] Flatcar Update Engine starting
Nov 23 23:10:08.497559 systemd[1]: motdgen.service: Deactivated successfully.
Nov 23 23:10:08.497812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 23 23:10:08.501550 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 23 23:10:08.514018 tar[1498]: linux-arm64/LICENSE
Nov 23 23:10:08.514018 tar[1498]: linux-arm64/helm
Nov 23 23:10:08.514336 extend-filesystems[1477]: Resized partition /dev/vda9
Nov 23 23:10:08.523971 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 23 23:10:08.524008 extend-filesystems[1515]: resize2fs 1.47.3 (8-Jul-2025)
Nov 23 23:10:08.532115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:10:08.546384 dbus-daemon[1474]: [system] SELinux support is enabled
Nov 23 23:10:08.546789 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 23 23:10:08.551302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 23 23:10:08.553502 update_engine[1488]: I20251123 23:10:08.552192 1488 update_check_scheduler.cc:74] Next update check in 7m8s
Nov 23 23:10:08.551327 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 23 23:10:08.553780 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 23 23:10:08.558507 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 23 23:10:08.553965 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 23 23:10:08.557285 systemd[1]: Started update-engine.service - Update Engine.
Nov 23 23:10:08.576805 jq[1500]: true
Nov 23 23:10:08.562094 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 23 23:10:08.577257 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 23 23:10:08.577257 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 23 23:10:08.577257 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 23 23:10:08.585044 extend-filesystems[1477]: Resized filesystem in /dev/vda9
Nov 23 23:10:08.579106 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 23 23:10:08.582347 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 23 23:10:08.613372 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 23:10:08.614517 systemd-logind[1483]: New seat seat0.
Nov 23 23:10:08.631663 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Nov 23 23:10:08.637555 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 23 23:10:08.639176 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 23 23:10:08.643187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:10:08.646409 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 23 23:10:08.650021 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 23 23:10:08.718728 containerd[1503]: time="2025-11-23T23:10:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 23 23:10:08.719761 containerd[1503]: time="2025-11-23T23:10:08.719708360Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 23 23:10:08.732470 containerd[1503]: time="2025-11-23T23:10:08.732416400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="110.28µs"
Nov 23 23:10:08.732574 containerd[1503]: time="2025-11-23T23:10:08.732517560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 23 23:10:08.732574 containerd[1503]: time="2025-11-23T23:10:08.732544080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 23 23:10:08.732751 containerd[1503]: time="2025-11-23T23:10:08.732729360Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 23 23:10:08.732801 containerd[1503]: time="2025-11-23T23:10:08.732753480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 23 23:10:08.732801 containerd[1503]: time="2025-11-23T23:10:08.732780920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733055 containerd[1503]: time="2025-11-23T23:10:08.733022120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733055 containerd[1503]: time="2025-11-23T23:10:08.733048440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733559 containerd[1503]: time="2025-11-23T23:10:08.733450400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733584 containerd[1503]: time="2025-11-23T23:10:08.733558240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733584 containerd[1503]: time="2025-11-23T23:10:08.733573760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733584 containerd[1503]: time="2025-11-23T23:10:08.733582280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 23 23:10:08.733760 containerd[1503]: time="2025-11-23T23:10:08.733738600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 23 23:10:08.734335 containerd[1503]: time="2025-11-23T23:10:08.734311040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 23:10:08.734441 containerd[1503]: time="2025-11-23T23:10:08.734355400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 23 23:10:08.734441 containerd[1503]: time="2025-11-23T23:10:08.734438880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 23 23:10:08.734490 containerd[1503]: time="2025-11-23T23:10:08.734477240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 23 23:10:08.735097 containerd[1503]: time="2025-11-23T23:10:08.734831000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 23 23:10:08.735097 containerd[1503]: time="2025-11-23T23:10:08.734952480Z" level=info msg="metadata content store policy set" policy=shared
Nov 23 23:10:08.740521 containerd[1503]: time="2025-11-23T23:10:08.740477160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 23 23:10:08.740671 containerd[1503]: time="2025-11-23T23:10:08.740657880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 23 23:10:08.740808 containerd[1503]: time="2025-11-23T23:10:08.740793720Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 23 23:10:08.740874 containerd[1503]: time="2025-11-23T23:10:08.740859880Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 23 23:10:08.740968 containerd[1503]: time="2025-11-23T23:10:08.740952720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 23 23:10:08.741019 containerd[1503]: time="2025-11-23T23:10:08.741006960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 23 23:10:08.741073 containerd[1503]: time="2025-11-23T23:10:08.741060480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 23 23:10:08.741126 containerd[1503]: time="2025-11-23T23:10:08.741113240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 23 23:10:08.741190 containerd[1503]: time="2025-11-23T23:10:08.741164960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 23 23:10:08.741239 containerd[1503]: time="2025-11-23T23:10:08.741227200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 23 23:10:08.741323 containerd[1503]: time="2025-11-23T23:10:08.741310280Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 23 23:10:08.741402 containerd[1503]: time="2025-11-23T23:10:08.741388080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 23 23:10:08.741620 containerd[1503]: time="2025-11-23T23:10:08.741598080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 23 23:10:08.741703 containerd[1503]: time="2025-11-23T23:10:08.741687080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 23 23:10:08.741761 containerd[1503]: time="2025-11-23T23:10:08.741749040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 23 23:10:08.741814 containerd[1503]: time="2025-11-23T23:10:08.741802320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741876920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741894320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741920400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741931960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741943560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741954720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.741965680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.742164360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 23 23:10:08.742206 containerd[1503]: time="2025-11-23T23:10:08.742178840Z" level=info msg="Start snapshots syncer"
Nov 23 23:10:08.742480 containerd[1503]: time="2025-11-23T23:10:08.742459320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 23 23:10:08.743060 containerd[1503]: time="2025-11-23T23:10:08.743022000Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 23 23:10:08.743283 containerd[1503]: time="2025-11-23T23:10:08.743264760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 23 23:10:08.743418 containerd[1503]: time="2025-11-23T23:10:08.743403880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 23 23:10:08.743743 containerd[1503]: time="2025-11-23T23:10:08.743718520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 23 23:10:08.743821 containerd[1503]: time="2025-11-23T23:10:08.743806560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 23 23:10:08.743893 containerd[1503]: time="2025-11-23T23:10:08.743880080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 23 23:10:08.743982 containerd[1503]: time="2025-11-23T23:10:08.743966640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 23 23:10:08.744040 containerd[1503]: time="2025-11-23T23:10:08.744027440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 23 23:10:08.744094 containerd[1503]: time="2025-11-23T23:10:08.744080840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 23 23:10:08.744156 containerd[1503]: time="2025-11-23T23:10:08.744144360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 23 23:10:08.744232 containerd[1503]: time="2025-11-23T23:10:08.744214680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 23 23:10:08.744296 containerd[1503]: time="2025-11-23T23:10:08.744282800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 23 23:10:08.744361 containerd[1503]: time="2025-11-23T23:10:08.744347680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 23 23:10:08.744461 containerd[1503]: time="2025-11-23T23:10:08.744446440Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 23:10:08.744586 containerd[1503]: time="2025-11-23T23:10:08.744569440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744625800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744644600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744653280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744663200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744674000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744753760Z" level=info msg="runtime interface created"
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744759320Z" level=info msg="created NRI interface"
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744767840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744780520Z" level=info msg="Connect containerd service"
Nov 23 23:10:08.744837 containerd[1503]: time="2025-11-23T23:10:08.744811440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 23 23:10:08.745935 containerd[1503]: time="2025-11-23T23:10:08.745885840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 23 23:10:08.817670 containerd[1503]: time="2025-11-23T23:10:08.817606840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 23 23:10:08.817670 containerd[1503]: time="2025-11-23T23:10:08.817678440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 23 23:10:08.817784 containerd[1503]: time="2025-11-23T23:10:08.817705720Z" level=info msg="Start subscribing containerd event"
Nov 23 23:10:08.817784 containerd[1503]: time="2025-11-23T23:10:08.817748000Z" level=info msg="Start recovering state"
Nov 23 23:10:08.817845 containerd[1503]: time="2025-11-23T23:10:08.817824640Z" level=info msg="Start event monitor"
Nov 23 23:10:08.817868 containerd[1503]: time="2025-11-23T23:10:08.817850920Z" level=info msg="Start cni network conf syncer for default"
Nov 23 23:10:08.817868 containerd[1503]: time="2025-11-23T23:10:08.817860880Z" level=info msg="Start streaming server"
Nov 23 23:10:08.817938 containerd[1503]: time="2025-11-23T23:10:08.817869440Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 23 23:10:08.817938 containerd[1503]: time="2025-11-23T23:10:08.817876240Z" level=info msg="runtime interface starting up..."
Nov 23 23:10:08.817938 containerd[1503]: time="2025-11-23T23:10:08.817881360Z" level=info msg="starting plugins..."
Nov 23 23:10:08.817938 containerd[1503]: time="2025-11-23T23:10:08.817893280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 23 23:10:08.818140 systemd[1]: Started containerd.service - containerd container runtime.
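The only error-level entry in the containerd startup above is the CNI loader finding no network config in /etc/cni/net.d; that is expected on a node where no CNI plugin or Kubernetes network add-on has been installed yet, and the "cni network conf syncer" picks up a config as soon as one appears. For illustration only, a minimal bridge conflist of the kind the CRI plugin would load from that directory might look like the following sketch (the network name, bridge name, and subnet are assumptions, not anything present on this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Per the cri plugin config logged above, conflists are read from confDir /etc/cni/net.d and plugin binaries from binDir /opt/cni/bin.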
Nov 23 23:10:08.819694 containerd[1503]: time="2025-11-23T23:10:08.819647040Z" level=info msg="containerd successfully booted in 0.105319s"
Nov 23 23:10:08.853089 tar[1498]: linux-arm64/README.md
Nov 23 23:10:08.873583 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 23 23:10:09.096547 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 23 23:10:09.115897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 23 23:10:09.118684 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 23 23:10:09.147700 systemd[1]: issuegen.service: Deactivated successfully.
Nov 23 23:10:09.147964 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 23 23:10:09.150720 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 23 23:10:09.186024 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 23 23:10:09.191014 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 23 23:10:09.193329 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 23 23:10:09.194873 systemd[1]: Reached target getty.target - Login Prompts.
Nov 23 23:10:10.095068 systemd-networkd[1438]: eth0: Gained IPv6LL
Nov 23 23:10:10.098022 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 23 23:10:10.099967 systemd[1]: Reached target network-online.target - Network is Online.
Nov 23 23:10:10.102462 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 23 23:10:10.104857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:10:10.108139 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 23 23:10:10.131803 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 23 23:10:10.133041 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 23 23:10:10.134720 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 23 23:10:10.138372 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 23 23:10:10.679871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:10:10.681535 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 23 23:10:10.684525 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:10:10.686017 systemd[1]: Startup finished in 2.182s (kernel) + 5.129s (initrd) + 3.998s (userspace) = 11.310s.
Nov 23 23:10:11.042497 kubelet[1613]: E1123 23:10:11.042373    1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:10:11.045318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:10:11.045458 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:10:11.045782 systemd[1]: kubelet.service: Consumed 748ms CPU time, 257.7M memory peak.
Nov 23 23:10:14.402365 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 23 23:10:14.403412 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:55972.service - OpenSSH per-connection server daemon (10.0.0.1:55972).
Nov 23 23:10:14.472295 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 55972 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:10:14.474856 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:10:14.480752 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
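The kubelet exit logged above is the expected state of a node that has not yet joined a cluster: kubelet.service starts at boot, finds no /var/lib/kubelet/config.yaml (a file normally written during `kubeadm init` or `kubeadm join`), and exits with status 1. For reference, a minimal KubeletConfiguration of the kind that file contains might look like the following sketch; the field values are illustrative assumptions, not the configuration this node would actually receive:

```yaml
# /var/lib/kubelet/config.yaml -- minimal KubeletConfiguration sketch.
# On a kubeadm-managed node this file is generated by kubeadm; the
# values below are placeholders for illustration only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup=true in the containerd runc options
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

Once the file exists, restarting kubelet.service (or letting its restart policy retry) clears the failure recorded here.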
Nov 23 23:10:14.481798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 23 23:10:14.487517 systemd-logind[1483]: New session 1 of user core.
Nov 23 23:10:14.503191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 23 23:10:14.506263 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 23 23:10:14.532089 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 23 23:10:14.534307 systemd-logind[1483]: New session c1 of user core.
Nov 23 23:10:14.649933 systemd[1631]: Queued start job for default target default.target.
Nov 23 23:10:14.667959 systemd[1631]: Created slice app.slice - User Application Slice.
Nov 23 23:10:14.667992 systemd[1631]: Reached target paths.target - Paths.
Nov 23 23:10:14.668033 systemd[1631]: Reached target timers.target - Timers.
Nov 23 23:10:14.669269 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 23 23:10:14.678845 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 23 23:10:14.678932 systemd[1631]: Reached target sockets.target - Sockets.
Nov 23 23:10:14.678975 systemd[1631]: Reached target basic.target - Basic System.
Nov 23 23:10:14.679003 systemd[1631]: Reached target default.target - Main User Target.
Nov 23 23:10:14.679030 systemd[1631]: Startup finished in 137ms.
Nov 23 23:10:14.679167 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 23 23:10:14.680851 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 23 23:10:14.742165 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:55986.service - OpenSSH per-connection server daemon (10.0.0.1:55986).
Nov 23 23:10:14.799722 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55986 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:14.801149 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:14.806068 systemd-logind[1483]: New session 2 of user core. Nov 23 23:10:14.823145 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:10:14.875298 sshd[1645]: Connection closed by 10.0.0.1 port 55986 Nov 23 23:10:14.875693 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:14.893553 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:55986.service: Deactivated successfully. Nov 23 23:10:14.898108 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:10:14.899495 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:10:14.903579 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:55998.service - OpenSSH per-connection server daemon (10.0.0.1:55998). Nov 23 23:10:14.904717 systemd-logind[1483]: Removed session 2. Nov 23 23:10:14.967087 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 55998 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:14.968756 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:14.973789 systemd-logind[1483]: New session 3 of user core. Nov 23 23:10:14.984127 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:10:15.033646 sshd[1654]: Connection closed by 10.0.0.1 port 55998 Nov 23 23:10:15.034677 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:15.045438 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:55998.service: Deactivated successfully. Nov 23 23:10:15.047462 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:10:15.048402 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. 
Nov 23 23:10:15.054336 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:56006.service - OpenSSH per-connection server daemon (10.0.0.1:56006). Nov 23 23:10:15.055145 systemd-logind[1483]: Removed session 3. Nov 23 23:10:15.110850 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 56006 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:15.112778 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:15.117827 systemd-logind[1483]: New session 4 of user core. Nov 23 23:10:15.131126 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:10:15.184965 sshd[1663]: Connection closed by 10.0.0.1 port 56006 Nov 23 23:10:15.185439 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:15.200249 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:56006.service: Deactivated successfully. Nov 23 23:10:15.201976 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:10:15.202686 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:10:15.209197 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014). Nov 23 23:10:15.210243 systemd-logind[1483]: Removed session 4. Nov 23 23:10:15.262047 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:15.264030 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:15.268998 systemd-logind[1483]: New session 5 of user core. Nov 23 23:10:15.278096 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 23 23:10:15.335351 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:10:15.335650 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:10:15.348952 sudo[1673]: pam_unix(sudo:session): session closed for user root Nov 23 23:10:15.350833 sshd[1672]: Connection closed by 10.0.0.1 port 56014 Nov 23 23:10:15.351415 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:15.371503 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:56014.service: Deactivated successfully. Nov 23 23:10:15.374642 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:10:15.375632 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:10:15.378804 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:56024.service - OpenSSH per-connection server daemon (10.0.0.1:56024). Nov 23 23:10:15.380633 systemd-logind[1483]: Removed session 5. Nov 23 23:10:15.455282 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 56024 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:15.456708 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:15.463960 systemd-logind[1483]: New session 6 of user core. Nov 23 23:10:15.480150 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 23 23:10:15.533650 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:10:15.534003 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:10:15.614667 sudo[1684]: pam_unix(sudo:session): session closed for user root Nov 23 23:10:15.620526 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:10:15.620792 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:10:15.632176 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:10:15.677547 augenrules[1706]: No rules Nov 23 23:10:15.679035 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:10:15.680963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:10:15.682472 sudo[1683]: pam_unix(sudo:session): session closed for user root Nov 23 23:10:15.684262 sshd[1682]: Connection closed by 10.0.0.1 port 56024 Nov 23 23:10:15.684773 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:15.694395 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:56024.service: Deactivated successfully. Nov 23 23:10:15.697738 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:10:15.700198 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:10:15.703224 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034). Nov 23 23:10:15.704408 systemd-logind[1483]: Removed session 6. Nov 23 23:10:15.758254 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:10:15.760861 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:10:15.766072 systemd-logind[1483]: New session 7 of user core. 
Nov 23 23:10:15.775097 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 23:10:15.828198 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:10:15.828528 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:10:16.131709 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:10:16.160342 (dockerd)[1741]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:10:16.373845 dockerd[1741]: time="2025-11-23T23:10:16.373766586Z" level=info msg="Starting up" Nov 23 23:10:16.375592 dockerd[1741]: time="2025-11-23T23:10:16.375558613Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:10:16.388079 dockerd[1741]: time="2025-11-23T23:10:16.387973720Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:10:16.424975 dockerd[1741]: time="2025-11-23T23:10:16.424881774Z" level=info msg="Loading containers: start." Nov 23 23:10:16.433942 kernel: Initializing XFRM netlink socket Nov 23 23:10:16.671061 systemd-networkd[1438]: docker0: Link UP Nov 23 23:10:16.675107 dockerd[1741]: time="2025-11-23T23:10:16.675057239Z" level=info msg="Loading containers: done." 
Nov 23 23:10:16.690262 dockerd[1741]: time="2025-11-23T23:10:16.690197193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:10:16.690405 dockerd[1741]: time="2025-11-23T23:10:16.690303167Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:10:16.690405 dockerd[1741]: time="2025-11-23T23:10:16.690395444Z" level=info msg="Initializing buildkit" Nov 23 23:10:16.716478 dockerd[1741]: time="2025-11-23T23:10:16.716428018Z" level=info msg="Completed buildkit initialization" Nov 23 23:10:16.725092 dockerd[1741]: time="2025-11-23T23:10:16.725023132Z" level=info msg="Daemon has completed initialization" Nov 23 23:10:16.725291 dockerd[1741]: time="2025-11-23T23:10:16.725121417Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:10:16.725362 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:10:17.282150 containerd[1503]: time="2025-11-23T23:10:17.282110913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 23 23:10:17.401850 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4144518110-merged.mount: Deactivated successfully. Nov 23 23:10:17.853881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377024495.mount: Deactivated successfully. 
Nov 23 23:10:18.828279 containerd[1503]: time="2025-11-23T23:10:18.828215648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:18.829655 containerd[1503]: time="2025-11-23T23:10:18.829408174Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=27385706" Nov 23 23:10:18.830605 containerd[1503]: time="2025-11-23T23:10:18.830573630Z" level=info msg="ImageCreate event name:\"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:18.833255 containerd[1503]: time="2025-11-23T23:10:18.833208000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:18.834413 containerd[1503]: time="2025-11-23T23:10:18.834372015Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"27382303\" in 1.552215609s" Nov 23 23:10:18.834413 containerd[1503]: time="2025-11-23T23:10:18.834416504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\"" Nov 23 23:10:18.835845 containerd[1503]: time="2025-11-23T23:10:18.835816461Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 23 23:10:19.913969 containerd[1503]: time="2025-11-23T23:10:19.913370274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:19.914315 containerd[1503]: time="2025-11-23T23:10:19.914014907Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=23551826" Nov 23 23:10:19.915257 containerd[1503]: time="2025-11-23T23:10:19.915189011Z" level=info msg="ImageCreate event name:\"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:19.918641 containerd[1503]: time="2025-11-23T23:10:19.918576102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:19.919437 containerd[1503]: time="2025-11-23T23:10:19.919228663Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"25136308\" in 1.083379406s" Nov 23 23:10:19.919437 containerd[1503]: time="2025-11-23T23:10:19.919261017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\"" Nov 23 23:10:19.919823 containerd[1503]: time="2025-11-23T23:10:19.919795894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 23 23:10:21.033720 containerd[1503]: time="2025-11-23T23:10:21.033656561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:21.034358 containerd[1503]: time="2025-11-23T23:10:21.034323012Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=18296698" Nov 23 23:10:21.035759 containerd[1503]: time="2025-11-23T23:10:21.035719972Z" level=info msg="ImageCreate event name:\"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:21.038350 containerd[1503]: time="2025-11-23T23:10:21.038310186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:21.039522 containerd[1503]: time="2025-11-23T23:10:21.039479778Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"19881198\" in 1.119647247s" Nov 23 23:10:21.039522 containerd[1503]: time="2025-11-23T23:10:21.039519375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\"" Nov 23 23:10:21.040075 containerd[1503]: time="2025-11-23T23:10:21.039966304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 23 23:10:21.295890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:10:21.298461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:21.462668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:10:21.466577 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:10:21.508105 kubelet[2034]: E1123 23:10:21.508043 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:10:21.512972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:10:21.513157 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:10:21.514313 systemd[1]: kubelet.service: Consumed 161ms CPU time, 106.4M memory peak. Nov 23 23:10:22.054234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550338542.mount: Deactivated successfully. Nov 23 23:10:22.448273 containerd[1503]: time="2025-11-23T23:10:22.448139532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:22.449398 containerd[1503]: time="2025-11-23T23:10:22.449241519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=28257771" Nov 23 23:10:22.450561 containerd[1503]: time="2025-11-23T23:10:22.450471936Z" level=info msg="ImageCreate event name:\"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:22.453014 containerd[1503]: time="2025-11-23T23:10:22.452883008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:22.453937 containerd[1503]: time="2025-11-23T23:10:22.453814168Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"28256788\" in 1.413816556s" Nov 23 23:10:22.453937 containerd[1503]: time="2025-11-23T23:10:22.453859046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\"" Nov 23 23:10:22.455572 containerd[1503]: time="2025-11-23T23:10:22.455541332Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 23 23:10:22.981023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214431500.mount: Deactivated successfully. Nov 23 23:10:23.729934 containerd[1503]: time="2025-11-23T23:10:23.729863841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:23.731446 containerd[1503]: time="2025-11-23T23:10:23.731398638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Nov 23 23:10:23.732526 containerd[1503]: time="2025-11-23T23:10:23.732490918Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:23.736097 containerd[1503]: time="2025-11-23T23:10:23.736046702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:23.739922 containerd[1503]: time="2025-11-23T23:10:23.739058529Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.283260897s" Nov 23 23:10:23.739922 containerd[1503]: time="2025-11-23T23:10:23.739120178Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 23 23:10:23.740337 containerd[1503]: time="2025-11-23T23:10:23.740300009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 23:10:24.178109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335217700.mount: Deactivated successfully. Nov 23 23:10:24.186137 containerd[1503]: time="2025-11-23T23:10:24.185424495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:10:24.186137 containerd[1503]: time="2025-11-23T23:10:24.185922751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 23:10:24.186869 containerd[1503]: time="2025-11-23T23:10:24.186839083Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:10:24.189061 containerd[1503]: time="2025-11-23T23:10:24.189024214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:10:24.189712 containerd[1503]: time="2025-11-23T23:10:24.189681110Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.337386ms" Nov 23 23:10:24.189806 containerd[1503]: time="2025-11-23T23:10:24.189792514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 23:10:24.190336 containerd[1503]: time="2025-11-23T23:10:24.190307903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 23 23:10:24.700491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873713328.mount: Deactivated successfully. Nov 23 23:10:26.457847 containerd[1503]: time="2025-11-23T23:10:26.457734047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:26.460953 containerd[1503]: time="2025-11-23T23:10:26.460893104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653" Nov 23 23:10:26.462639 containerd[1503]: time="2025-11-23T23:10:26.462560251Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:26.466180 containerd[1503]: time="2025-11-23T23:10:26.466107286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:26.467225 containerd[1503]: time="2025-11-23T23:10:26.467078771Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.276736243s" Nov 23 23:10:26.467225 containerd[1503]: time="2025-11-23T23:10:26.467118597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 23 23:10:31.763506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:10:31.765121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:31.949146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:31.962363 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:10:32.010604 kubelet[2191]: E1123 23:10:32.010516 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:10:32.013439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:10:32.013913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:10:32.017247 systemd[1]: kubelet.service: Consumed 161ms CPU time, 107.7M memory peak. Nov 23 23:10:33.304316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:33.304614 systemd[1]: kubelet.service: Consumed 161ms CPU time, 107.7M memory peak. Nov 23 23:10:33.306738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:33.328466 systemd[1]: Reload requested from client PID 2207 ('systemctl') (unit session-7.scope)... 
Nov 23 23:10:33.328482 systemd[1]: Reloading... Nov 23 23:10:33.405067 zram_generator::config[2249]: No configuration found. Nov 23 23:10:33.659028 systemd[1]: Reloading finished in 330 ms. Nov 23 23:10:33.704358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:33.706804 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:33.708323 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:10:33.708552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:33.708593 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.1M memory peak. Nov 23 23:10:33.710215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:33.865605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:33.869782 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:10:33.914716 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:10:33.914716 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:10:33.914716 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 23:10:33.914716 kubelet[2297]: I1123 23:10:33.914691 2297 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:10:34.308001 kubelet[2297]: I1123 23:10:34.307655 2297 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 23:10:34.308001 kubelet[2297]: I1123 23:10:34.307689 2297 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:10:34.308001 kubelet[2297]: I1123 23:10:34.307954 2297 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:10:34.334229 kubelet[2297]: E1123 23:10:34.334183 2297 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 23:10:34.334772 kubelet[2297]: I1123 23:10:34.334751 2297 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:10:34.349572 kubelet[2297]: I1123 23:10:34.349531 2297 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:10:34.353946 kubelet[2297]: I1123 23:10:34.353918 2297 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:10:34.355159 kubelet[2297]: I1123 23:10:34.355104 2297 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:10:34.355338 kubelet[2297]: I1123 23:10:34.355155 2297 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:10:34.355441 kubelet[2297]: I1123 23:10:34.355401 2297 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:10:34.355441 
kubelet[2297]: I1123 23:10:34.355411 2297 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 23:10:34.355683 kubelet[2297]: I1123 23:10:34.355645 2297 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:10:34.358762 kubelet[2297]: I1123 23:10:34.358729 2297 kubelet.go:480] "Attempting to sync node with API server" Nov 23 23:10:34.358762 kubelet[2297]: I1123 23:10:34.358759 2297 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:10:34.358861 kubelet[2297]: I1123 23:10:34.358783 2297 kubelet.go:386] "Adding apiserver pod source" Nov 23 23:10:34.360082 kubelet[2297]: I1123 23:10:34.359962 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:10:34.361582 kubelet[2297]: I1123 23:10:34.361146 2297 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:10:34.362003 kubelet[2297]: I1123 23:10:34.361894 2297 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:10:34.362048 kubelet[2297]: W1123 23:10:34.362039 2297 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 23 23:10:34.363241 kubelet[2297]: E1123 23:10:34.363210 2297 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:10:34.364430 kubelet[2297]: I1123 23:10:34.364396 2297 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:10:34.368871 kubelet[2297]: I1123 23:10:34.364444 2297 server.go:1289] "Started kubelet" Nov 23 23:10:34.368871 kubelet[2297]: E1123 23:10:34.366287 2297 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 23:10:34.368871 kubelet[2297]: I1123 23:10:34.366384 2297 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:10:34.368871 kubelet[2297]: I1123 23:10:34.367691 2297 server.go:317] "Adding debug handlers to kubelet server" Nov 23 23:10:34.371689 kubelet[2297]: I1123 23:10:34.370964 2297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:10:34.371689 kubelet[2297]: I1123 23:10:34.371261 2297 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:10:34.372908 kubelet[2297]: E1123 23:10:34.369956 2297 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac591de9826b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:10:34.36441362 +0000 UTC m=+0.491438683,LastTimestamp:2025-11-23 23:10:34.36441362 +0000 UTC m=+0.491438683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:10:34.373102 kubelet[2297]: E1123 23:10:34.373052 2297 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:10:34.373716 kubelet[2297]: I1123 23:10:34.373679 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.373892 2297 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.373924 2297 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.374141 2297 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.374185 2297 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:10:34.375985 kubelet[2297]: E1123 23:10:34.374583 2297 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:10:34.375985 kubelet[2297]: E1123 23:10:34.374811 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Nov 23 23:10:34.375985 kubelet[2297]: E1123 23:10:34.373894 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.375667 2297 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:10:34.375985 kubelet[2297]: I1123 23:10:34.375908 2297 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:10:34.378225 kubelet[2297]: I1123 23:10:34.377153 2297 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:10:34.399032 kubelet[2297]: I1123 23:10:34.398986 2297 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 23:10:34.400204 kubelet[2297]: I1123 23:10:34.400182 2297 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 23 23:10:34.400204 kubelet[2297]: I1123 23:10:34.400212 2297 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 23:10:34.400560 kubelet[2297]: I1123 23:10:34.400231 2297 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:10:34.400560 kubelet[2297]: I1123 23:10:34.400239 2297 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 23:10:34.400560 kubelet[2297]: E1123 23:10:34.400284 2297 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:10:34.402381 kubelet[2297]: E1123 23:10:34.402333 2297 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:10:34.402448 kubelet[2297]: I1123 23:10:34.402425 2297 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:10:34.402448 kubelet[2297]: I1123 23:10:34.402434 2297 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:10:34.402520 kubelet[2297]: I1123 23:10:34.402452 2297 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:10:34.475116 kubelet[2297]: E1123 23:10:34.475073 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:10:34.489218 kubelet[2297]: I1123 23:10:34.489171 2297 policy_none.go:49] "None policy: Start" Nov 23 23:10:34.489218 kubelet[2297]: I1123 23:10:34.489209 2297 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:10:34.489218 kubelet[2297]: I1123 23:10:34.489223 2297 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:10:34.495359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 23 23:10:34.501351 kubelet[2297]: E1123 23:10:34.501318 2297 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 23:10:34.508125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:10:34.511302 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:10:34.525125 kubelet[2297]: E1123 23:10:34.524944 2297 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:10:34.525226 kubelet[2297]: I1123 23:10:34.525175 2297 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:10:34.525226 kubelet[2297]: I1123 23:10:34.525185 2297 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:10:34.525504 kubelet[2297]: I1123 23:10:34.525489 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:10:34.526915 kubelet[2297]: E1123 23:10:34.526818 2297 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:10:34.526915 kubelet[2297]: E1123 23:10:34.526873 2297 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 23:10:34.577092 kubelet[2297]: E1123 23:10:34.575729 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Nov 23 23:10:34.627280 kubelet[2297]: I1123 23:10:34.627238 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:10:34.627742 kubelet[2297]: E1123 23:10:34.627696 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 23 23:10:34.724758 systemd[1]: Created slice kubepods-burstable-podbed01379e0793ed5be881848a6990c96.slice - libcontainer container kubepods-burstable-podbed01379e0793ed5be881848a6990c96.slice. Nov 23 23:10:34.751318 kubelet[2297]: E1123 23:10:34.751261 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:34.753970 systemd[1]: Created slice kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice - libcontainer container kubepods-burstable-pod1d5832191310254249cf17c2353d71ec.slice. Nov 23 23:10:34.773543 kubelet[2297]: E1123 23:10:34.773297 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:34.775624 systemd[1]: Created slice kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice - libcontainer container kubepods-burstable-pode51b49401d7e125d16957469facd7352.slice. 
Nov 23 23:10:34.776204 kubelet[2297]: I1123 23:10:34.776156 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:34.776270 kubelet[2297]: I1123 23:10:34.776194 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:34.776270 kubelet[2297]: I1123 23:10:34.776226 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:34.776270 kubelet[2297]: I1123 23:10:34.776249 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:34.776270 kubelet[2297]: I1123 23:10:34.776265 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:34.776356 kubelet[2297]: I1123 23:10:34.776289 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:10:34.776356 kubelet[2297]: I1123 23:10:34.776304 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:34.776356 kubelet[2297]: I1123 23:10:34.776320 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:34.776356 kubelet[2297]: I1123 23:10:34.776340 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:34.777551 kubelet[2297]: E1123 23:10:34.777501 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:34.829804 kubelet[2297]: I1123 23:10:34.829711 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:10:34.830095 
kubelet[2297]: E1123 23:10:34.830068 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 23 23:10:34.976883 kubelet[2297]: E1123 23:10:34.976804 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Nov 23 23:10:35.053027 containerd[1503]: time="2025-11-23T23:10:35.052986257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bed01379e0793ed5be881848a6990c96,Namespace:kube-system,Attempt:0,}" Nov 23 23:10:35.075501 containerd[1503]: time="2025-11-23T23:10:35.075394782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,}" Nov 23 23:10:35.075956 containerd[1503]: time="2025-11-23T23:10:35.075919697Z" level=info msg="connecting to shim 74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0" address="unix:///run/containerd/s/18e59d34f18e049137b5d400e68ba930cbc3c091415129d55656476fbc8ece93" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:10:35.078779 containerd[1503]: time="2025-11-23T23:10:35.078584647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,}" Nov 23 23:10:35.099295 systemd[1]: Started cri-containerd-74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0.scope - libcontainer container 74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0. 
Nov 23 23:10:35.111057 containerd[1503]: time="2025-11-23T23:10:35.111005452Z" level=info msg="connecting to shim 738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4" address="unix:///run/containerd/s/d2e40f8f1cd07470feda3cb3e466d2ca35dbd227ccf54d278cac5b169aaf9ca8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:10:35.112548 containerd[1503]: time="2025-11-23T23:10:35.112506730Z" level=info msg="connecting to shim 3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c" address="unix:///run/containerd/s/c12c8eab64fe46cc89fc95d7896f2fa70ed1ecec491c4c443a64c945a01829c1" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:10:35.146169 systemd[1]: Started cri-containerd-738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4.scope - libcontainer container 738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4. Nov 23 23:10:35.150882 systemd[1]: Started cri-containerd-3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c.scope - libcontainer container 3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c. 
Nov 23 23:10:35.165282 containerd[1503]: time="2025-11-23T23:10:35.165159571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bed01379e0793ed5be881848a6990c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0\"" Nov 23 23:10:35.182690 containerd[1503]: time="2025-11-23T23:10:35.182623139Z" level=info msg="CreateContainer within sandbox \"74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:10:35.202980 containerd[1503]: time="2025-11-23T23:10:35.202937366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1d5832191310254249cf17c2353d71ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4\"" Nov 23 23:10:35.205456 containerd[1503]: time="2025-11-23T23:10:35.205416688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e51b49401d7e125d16957469facd7352,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c\"" Nov 23 23:10:35.208157 containerd[1503]: time="2025-11-23T23:10:35.208115530Z" level=info msg="CreateContainer within sandbox \"738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:10:35.208456 containerd[1503]: time="2025-11-23T23:10:35.208138979Z" level=info msg="Container 84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:10:35.211034 containerd[1503]: time="2025-11-23T23:10:35.210996000Z" level=info msg="CreateContainer within sandbox \"3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 
23:10:35.216553 containerd[1503]: time="2025-11-23T23:10:35.216507528Z" level=info msg="Container b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:10:35.228645 containerd[1503]: time="2025-11-23T23:10:35.228569009Z" level=info msg="CreateContainer within sandbox \"74259d3796affd20c2ddecb9be3cdde2c5d8b49c35e53e627c97db5b0fdcb0c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce\"" Nov 23 23:10:35.229439 containerd[1503]: time="2025-11-23T23:10:35.229414603Z" level=info msg="StartContainer for \"84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce\"" Nov 23 23:10:35.230029 containerd[1503]: time="2025-11-23T23:10:35.229999580Z" level=info msg="Container a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:10:35.230692 containerd[1503]: time="2025-11-23T23:10:35.230563750Z" level=info msg="connecting to shim 84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce" address="unix:///run/containerd/s/18e59d34f18e049137b5d400e68ba930cbc3c091415129d55656476fbc8ece93" protocol=ttrpc version=3 Nov 23 23:10:35.232094 kubelet[2297]: I1123 23:10:35.232064 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:10:35.232542 kubelet[2297]: E1123 23:10:35.232508 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Nov 23 23:10:35.234473 containerd[1503]: time="2025-11-23T23:10:35.234428826Z" level=info msg="CreateContainer within sandbox \"738d87b5539b2661b32c71550b90286a247669a7d6ca7d6cd1eff99b8bfe1dc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1\"" 
Nov 23 23:10:35.235387 containerd[1503]: time="2025-11-23T23:10:35.235355850Z" level=info msg="StartContainer for \"b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1\"" Nov 23 23:10:35.237129 containerd[1503]: time="2025-11-23T23:10:35.236928835Z" level=info msg="connecting to shim b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1" address="unix:///run/containerd/s/d2e40f8f1cd07470feda3cb3e466d2ca35dbd227ccf54d278cac5b169aaf9ca8" protocol=ttrpc version=3 Nov 23 23:10:35.240307 containerd[1503]: time="2025-11-23T23:10:35.240253270Z" level=info msg="CreateContainer within sandbox \"3c1c68fe60a81b32b147b89a4674ea2fc3287f51ef365b771fafdc7acfd77b0c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea\"" Nov 23 23:10:35.241023 containerd[1503]: time="2025-11-23T23:10:35.240988783Z" level=info msg="StartContainer for \"a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea\"" Nov 23 23:10:35.242231 containerd[1503]: time="2025-11-23T23:10:35.242183787Z" level=info msg="connecting to shim a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea" address="unix:///run/containerd/s/c12c8eab64fe46cc89fc95d7896f2fa70ed1ecec491c4c443a64c945a01829c1" protocol=ttrpc version=3 Nov 23 23:10:35.255450 systemd[1]: Started cri-containerd-84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce.scope - libcontainer container 84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce. Nov 23 23:10:35.260624 systemd[1]: Started cri-containerd-b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1.scope - libcontainer container b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1. Nov 23 23:10:35.273116 systemd[1]: Started cri-containerd-a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea.scope - libcontainer container a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea. 
Nov 23 23:10:35.299932 kubelet[2297]: E1123 23:10:35.299871 2297 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:10:35.317664 containerd[1503]: time="2025-11-23T23:10:35.314867911Z" level=info msg="StartContainer for \"84c0b2681b5eb7e6542172b6772f1aefe6d47c756abf3878543c70fb7814f9ce\" returns successfully" Nov 23 23:10:35.321023 containerd[1503]: time="2025-11-23T23:10:35.320989585Z" level=info msg="StartContainer for \"b92d8d2ce92badb3ff7e5dcf7e3a6708f89b8668fa0e777b9724d9481ef377e1\" returns successfully" Nov 23 23:10:35.335609 containerd[1503]: time="2025-11-23T23:10:35.335565680Z" level=info msg="StartContainer for \"a12602e8159e0f8147bb1add782529d6224a81b59569d54c601ea4ed9fd6e9ea\" returns successfully" Nov 23 23:10:35.410338 kubelet[2297]: E1123 23:10:35.410228 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:35.417991 kubelet[2297]: E1123 23:10:35.417961 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:35.421207 kubelet[2297]: E1123 23:10:35.421186 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:36.034253 kubelet[2297]: I1123 23:10:36.034220 2297 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:10:36.423893 kubelet[2297]: E1123 23:10:36.423675 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Nov 23 23:10:36.423893 kubelet[2297]: E1123 23:10:36.423818 2297 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:10:37.510281 kubelet[2297]: E1123 23:10:37.510233 2297 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 23:10:37.681370 kubelet[2297]: I1123 23:10:37.681306 2297 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:10:37.681370 kubelet[2297]: E1123 23:10:37.681349 2297 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 23 23:10:37.775078 kubelet[2297]: I1123 23:10:37.774888 2297 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:37.780776 kubelet[2297]: E1123 23:10:37.780518 2297 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:37.780776 kubelet[2297]: I1123 23:10:37.780565 2297 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:37.782480 kubelet[2297]: E1123 23:10:37.782287 2297 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:37.782480 kubelet[2297]: I1123 23:10:37.782313 2297 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:10:37.783862 kubelet[2297]: E1123 23:10:37.783838 2297 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: 
no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:10:38.366328 kubelet[2297]: I1123 23:10:38.366198 2297 apiserver.go:52] "Watching apiserver" Nov 23 23:10:38.374893 kubelet[2297]: I1123 23:10:38.374821 2297 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:10:39.563807 kubelet[2297]: I1123 23:10:39.563770 2297 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:39.830277 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)... Nov 23 23:10:39.830295 systemd[1]: Reloading... Nov 23 23:10:39.921001 zram_generator::config[2628]: No configuration found. Nov 23 23:10:40.096192 systemd[1]: Reloading finished in 265 ms. Nov 23 23:10:40.129763 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:40.142837 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:10:40.143148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:40.143211 systemd[1]: kubelet.service: Consumed 873ms CPU time, 127.8M memory peak. Nov 23 23:10:40.145036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:10:40.303615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:10:40.314068 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:10:40.354212 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:10:40.354212 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 23 23:10:40.354212 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:10:40.354212 kubelet[2670]: I1123 23:10:40.354168 2670 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:10:40.360335 kubelet[2670]: I1123 23:10:40.360291 2670 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 23:10:40.360335 kubelet[2670]: I1123 23:10:40.360335 2670 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:10:40.360743 kubelet[2670]: I1123 23:10:40.360719 2670 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:10:40.362565 kubelet[2670]: I1123 23:10:40.362536 2670 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 23:10:40.364947 kubelet[2670]: I1123 23:10:40.364917 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:10:40.368556 kubelet[2670]: I1123 23:10:40.368533 2670 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:10:40.371434 kubelet[2670]: I1123 23:10:40.371389 2670 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:10:40.371714 kubelet[2670]: I1123 23:10:40.371668 2670 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:10:40.371959 kubelet[2670]: I1123 23:10:40.371697 2670 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:10:40.372038 kubelet[2670]: I1123 23:10:40.371973 2670 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:10:40.372038 
kubelet[2670]: I1123 23:10:40.371985 2670 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 23:10:40.372038 kubelet[2670]: I1123 23:10:40.372034 2670 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:10:40.372201 kubelet[2670]: I1123 23:10:40.372187 2670 kubelet.go:480] "Attempting to sync node with API server" Nov 23 23:10:40.372232 kubelet[2670]: I1123 23:10:40.372204 2670 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:10:40.372232 kubelet[2670]: I1123 23:10:40.372229 2670 kubelet.go:386] "Adding apiserver pod source" Nov 23 23:10:40.372291 kubelet[2670]: I1123 23:10:40.372241 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:10:40.377925 kubelet[2670]: I1123 23:10:40.377182 2670 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:10:40.378092 kubelet[2670]: I1123 23:10:40.378075 2670 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:10:40.382614 kubelet[2670]: I1123 23:10:40.382576 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:10:40.382717 kubelet[2670]: I1123 23:10:40.382624 2670 server.go:1289] "Started kubelet" Nov 23 23:10:40.384663 kubelet[2670]: I1123 23:10:40.383583 2670 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:10:40.384873 kubelet[2670]: I1123 23:10:40.383666 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:10:40.385117 kubelet[2670]: I1123 23:10:40.385098 2670 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:10:40.389688 kubelet[2670]: I1123 23:10:40.389637 2670 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:10:40.389820 kubelet[2670]: I1123 23:10:40.389781 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:10:40.390133 kubelet[2670]: I1123 23:10:40.390119 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:10:40.390241 kubelet[2670]: E1123 23:10:40.390220 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:10:40.390377 kubelet[2670]: I1123 23:10:40.390318 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:10:40.390466 kubelet[2670]: I1123 23:10:40.390448 2670 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:10:40.392446 kubelet[2670]: I1123 23:10:40.392402 2670 server.go:317] "Adding debug handlers to kubelet server" Nov 23 23:10:40.400614 kubelet[2670]: E1123 23:10:40.400553 2670 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:10:40.415239 kubelet[2670]: I1123 23:10:40.415148 2670 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:10:40.418676 kubelet[2670]: I1123 23:10:40.418638 2670 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:10:40.418676 kubelet[2670]: I1123 23:10:40.418660 2670 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:10:40.426148 kubelet[2670]: I1123 23:10:40.426011 2670 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 23:10:40.427754 kubelet[2670]: I1123 23:10:40.427022 2670 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:10:40.427754 kubelet[2670]: I1123 23:10:40.427047 2670 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 23:10:40.427754 kubelet[2670]: I1123 23:10:40.427069 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:10:40.427754 kubelet[2670]: I1123 23:10:40.427076 2670 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 23:10:40.427754 kubelet[2670]: E1123 23:10:40.427115 2670 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:10:40.464045 kubelet[2670]: I1123 23:10:40.463865 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:10:40.464045 kubelet[2670]: I1123 23:10:40.463984 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:10:40.464045 kubelet[2670]: I1123 23:10:40.464007 2670 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:10:40.464396 kubelet[2670]: I1123 23:10:40.464374 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:10:40.464483 kubelet[2670]: I1123 23:10:40.464455 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:10:40.464544 kubelet[2670]: I1123 23:10:40.464536 2670 policy_none.go:49] "None policy: Start" Nov 23 23:10:40.464594 kubelet[2670]: I1123 23:10:40.464586 2670 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:10:40.464711 kubelet[2670]: I1123 23:10:40.464643 2670 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:10:40.464819 kubelet[2670]: I1123 23:10:40.464807 2670 state_mem.go:75] "Updated machine memory state" Nov 23 23:10:40.471169 kubelet[2670]: E1123 23:10:40.471135 2670 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:10:40.471472 kubelet[2670]: I1123 
23:10:40.471453 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:10:40.471577 kubelet[2670]: I1123 23:10:40.471545 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:10:40.471817 kubelet[2670]: I1123 23:10:40.471795 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:10:40.473230 kubelet[2670]: E1123 23:10:40.473204 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:10:40.528824 kubelet[2670]: I1123 23:10:40.528628 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:40.528824 kubelet[2670]: I1123 23:10:40.528675 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:10:40.528824 kubelet[2670]: I1123 23:10:40.528766 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:40.536756 kubelet[2670]: E1123 23:10:40.536112 2670 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:40.574285 kubelet[2670]: I1123 23:10:40.574244 2670 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:10:40.583920 kubelet[2670]: I1123 23:10:40.583864 2670 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 23:10:40.584184 kubelet[2670]: I1123 23:10:40.584134 2670 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:10:40.591510 kubelet[2670]: I1123 23:10:40.591456 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:40.591854 kubelet[2670]: I1123 23:10:40.591740 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:40.591854 kubelet[2670]: I1123 23:10:40.591774 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bed01379e0793ed5be881848a6990c96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bed01379e0793ed5be881848a6990c96\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:40.591854 kubelet[2670]: I1123 23:10:40.591809 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:40.591854 kubelet[2670]: I1123 23:10:40.591823 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e51b49401d7e125d16957469facd7352-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e51b49401d7e125d16957469facd7352\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:10:40.592570 kubelet[2670]: I1123 23:10:40.592002 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:40.592732 kubelet[2670]: I1123 23:10:40.592712 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:40.593147 kubelet[2670]: I1123 23:10:40.593127 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:40.593285 kubelet[2670]: I1123 23:10:40.593255 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5832191310254249cf17c2353d71ec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1d5832191310254249cf17c2353d71ec\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:10:41.373259 kubelet[2670]: I1123 23:10:41.373003 2670 apiserver.go:52] "Watching apiserver" Nov 23 23:10:41.391137 kubelet[2670]: I1123 23:10:41.391087 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:10:41.453412 kubelet[2670]: I1123 23:10:41.453168 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:41.461217 kubelet[2670]: E1123 23:10:41.461167 2670 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Nov 23 23:10:41.474163 kubelet[2670]: I1123 23:10:41.474097 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.474072369 podStartE2EDuration="2.474072369s" podCreationTimestamp="2025-11-23 23:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:10:41.473932334 +0000 UTC m=+1.156037313" watchObservedRunningTime="2025-11-23 23:10:41.474072369 +0000 UTC m=+1.156177348" Nov 23 23:10:41.493357 kubelet[2670]: I1123 23:10:41.493269 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.493250767 podStartE2EDuration="1.493250767s" podCreationTimestamp="2025-11-23 23:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:10:41.484166476 +0000 UTC m=+1.166271495" watchObservedRunningTime="2025-11-23 23:10:41.493250767 +0000 UTC m=+1.175355706" Nov 23 23:10:41.507868 kubelet[2670]: I1123 23:10:41.507809 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.507763548 podStartE2EDuration="1.507763548s" podCreationTimestamp="2025-11-23 23:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:10:41.493834074 +0000 UTC m=+1.175939053" watchObservedRunningTime="2025-11-23 23:10:41.507763548 +0000 UTC m=+1.189868527" Nov 23 23:10:46.440236 kubelet[2670]: I1123 23:10:46.440196 2670 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:10:46.440688 containerd[1503]: time="2025-11-23T23:10:46.440532152Z" level=info msg="No cni 
config template is specified, wait for other system components to drop the config." Nov 23 23:10:46.440869 kubelet[2670]: I1123 23:10:46.440768 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:10:47.410954 systemd[1]: Created slice kubepods-besteffort-poddb2d6ff2_2d6b_419a_ae26_f18a095b1951.slice - libcontainer container kubepods-besteffort-poddb2d6ff2_2d6b_419a_ae26_f18a095b1951.slice. Nov 23 23:10:47.434732 kubelet[2670]: I1123 23:10:47.434597 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db2d6ff2-2d6b-419a-ae26-f18a095b1951-lib-modules\") pod \"kube-proxy-cs4jf\" (UID: \"db2d6ff2-2d6b-419a-ae26-f18a095b1951\") " pod="kube-system/kube-proxy-cs4jf" Nov 23 23:10:47.434732 kubelet[2670]: I1123 23:10:47.434658 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfjn9\" (UniqueName: \"kubernetes.io/projected/db2d6ff2-2d6b-419a-ae26-f18a095b1951-kube-api-access-tfjn9\") pod \"kube-proxy-cs4jf\" (UID: \"db2d6ff2-2d6b-419a-ae26-f18a095b1951\") " pod="kube-system/kube-proxy-cs4jf" Nov 23 23:10:47.434732 kubelet[2670]: I1123 23:10:47.434680 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db2d6ff2-2d6b-419a-ae26-f18a095b1951-kube-proxy\") pod \"kube-proxy-cs4jf\" (UID: \"db2d6ff2-2d6b-419a-ae26-f18a095b1951\") " pod="kube-system/kube-proxy-cs4jf" Nov 23 23:10:47.434732 kubelet[2670]: I1123 23:10:47.434695 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db2d6ff2-2d6b-419a-ae26-f18a095b1951-xtables-lock\") pod \"kube-proxy-cs4jf\" (UID: \"db2d6ff2-2d6b-419a-ae26-f18a095b1951\") " pod="kube-system/kube-proxy-cs4jf" Nov 23 23:10:47.621452 
systemd[1]: Created slice kubepods-besteffort-podbc2958eb_95ef_45c9_ae43_d9f46f50f205.slice - libcontainer container kubepods-besteffort-podbc2958eb_95ef_45c9_ae43_d9f46f50f205.slice. Nov 23 23:10:47.636830 kubelet[2670]: I1123 23:10:47.636777 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc2958eb-95ef-45c9-ae43-d9f46f50f205-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vkcxp\" (UID: \"bc2958eb-95ef-45c9-ae43-d9f46f50f205\") " pod="tigera-operator/tigera-operator-7dcd859c48-vkcxp" Nov 23 23:10:47.637317 kubelet[2670]: I1123 23:10:47.637267 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v8c2\" (UniqueName: \"kubernetes.io/projected/bc2958eb-95ef-45c9-ae43-d9f46f50f205-kube-api-access-5v8c2\") pod \"tigera-operator-7dcd859c48-vkcxp\" (UID: \"bc2958eb-95ef-45c9-ae43-d9f46f50f205\") " pod="tigera-operator/tigera-operator-7dcd859c48-vkcxp" Nov 23 23:10:47.723175 containerd[1503]: time="2025-11-23T23:10:47.723072111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cs4jf,Uid:db2d6ff2-2d6b-419a-ae26-f18a095b1951,Namespace:kube-system,Attempt:0,}" Nov 23 23:10:47.740163 containerd[1503]: time="2025-11-23T23:10:47.740119952Z" level=info msg="connecting to shim b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791" address="unix:///run/containerd/s/d720b60b607a7f2e579b1ae0fee0787bfe91d083f04b000fbe3669a4d8beff9a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:10:47.776136 systemd[1]: Started cri-containerd-b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791.scope - libcontainer container b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791. 
Nov 23 23:10:47.801149 containerd[1503]: time="2025-11-23T23:10:47.801112519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cs4jf,Uid:db2d6ff2-2d6b-419a-ae26-f18a095b1951,Namespace:kube-system,Attempt:0,} returns sandbox id \"b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791\"" Nov 23 23:10:47.806769 containerd[1503]: time="2025-11-23T23:10:47.806731441Z" level=info msg="CreateContainer within sandbox \"b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:10:47.815574 containerd[1503]: time="2025-11-23T23:10:47.815538390Z" level=info msg="Container 7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:10:47.823333 containerd[1503]: time="2025-11-23T23:10:47.823285757Z" level=info msg="CreateContainer within sandbox \"b467b4e35e3d07e5812f1242759289b9f2286af6041ee1d7bc082be4872ac791\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734\"" Nov 23 23:10:47.824123 containerd[1503]: time="2025-11-23T23:10:47.824094855Z" level=info msg="StartContainer for \"7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734\"" Nov 23 23:10:47.825568 containerd[1503]: time="2025-11-23T23:10:47.825544544Z" level=info msg="connecting to shim 7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734" address="unix:///run/containerd/s/d720b60b607a7f2e579b1ae0fee0787bfe91d083f04b000fbe3669a4d8beff9a" protocol=ttrpc version=3 Nov 23 23:10:47.856123 systemd[1]: Started cri-containerd-7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734.scope - libcontainer container 7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734. 
Nov 23 23:10:47.925918 containerd[1503]: time="2025-11-23T23:10:47.925342317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vkcxp,Uid:bc2958eb-95ef-45c9-ae43-d9f46f50f205,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:10:47.945056 containerd[1503]: time="2025-11-23T23:10:47.945004085Z" level=info msg="connecting to shim 81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f" address="unix:///run/containerd/s/da5b58a987feadb447f80b810544a1db777fcb39aa67a767e7d5cc976787d30b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:10:47.946064 containerd[1503]: time="2025-11-23T23:10:47.946035222Z" level=info msg="StartContainer for \"7e76f5b6b6a17d4c4b0b7c58373e8280970da4846aa5fdf7a64a1e6fe0857734\" returns successfully" Nov 23 23:10:47.972145 systemd[1]: Started cri-containerd-81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f.scope - libcontainer container 81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f. Nov 23 23:10:48.014808 containerd[1503]: time="2025-11-23T23:10:48.014690794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vkcxp,Uid:bc2958eb-95ef-45c9-ae43-d9f46f50f205,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f\"" Nov 23 23:10:48.016973 containerd[1503]: time="2025-11-23T23:10:48.016934594Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:10:48.501815 kubelet[2670]: I1123 23:10:48.501387 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cs4jf" podStartSLOduration=1.5013715460000001 podStartE2EDuration="1.501371546s" podCreationTimestamp="2025-11-23 23:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:10:48.500745846 +0000 UTC m=+8.182850825" watchObservedRunningTime="2025-11-23 
23:10:48.501371546 +0000 UTC m=+8.183476525" Nov 23 23:10:48.550045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293043905.mount: Deactivated successfully. Nov 23 23:10:49.080350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685577779.mount: Deactivated successfully. Nov 23 23:10:49.373202 containerd[1503]: time="2025-11-23T23:10:49.373138401Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:49.374520 containerd[1503]: time="2025-11-23T23:10:49.373861190Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:10:49.374965 containerd[1503]: time="2025-11-23T23:10:49.374932151Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:49.377118 containerd[1503]: time="2025-11-23T23:10:49.377010024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:10:49.378333 containerd[1503]: time="2025-11-23T23:10:49.378223046Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.361250126s" Nov 23 23:10:49.378333 containerd[1503]: time="2025-11-23T23:10:49.378256891Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:10:49.387182 containerd[1503]: time="2025-11-23T23:10:49.387110144Z" level=info 
msg="CreateContainer within sandbox \"81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:10:49.418500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983156317.mount: Deactivated successfully. Nov 23 23:10:49.426996 containerd[1503]: time="2025-11-23T23:10:49.426950462Z" level=info msg="Container 2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:10:49.433802 containerd[1503]: time="2025-11-23T23:10:49.433745565Z" level=info msg="CreateContainer within sandbox \"81345b34c90d908b52834ac472ab2a7f9a8a0396f694d32888d73025ee7d1b5f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3\"" Nov 23 23:10:49.434330 containerd[1503]: time="2025-11-23T23:10:49.434292887Z" level=info msg="StartContainer for \"2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3\"" Nov 23 23:10:49.435327 containerd[1503]: time="2025-11-23T23:10:49.435296398Z" level=info msg="connecting to shim 2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3" address="unix:///run/containerd/s/da5b58a987feadb447f80b810544a1db777fcb39aa67a767e7d5cc976787d30b" protocol=ttrpc version=3 Nov 23 23:10:49.455142 systemd[1]: Started cri-containerd-2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3.scope - libcontainer container 2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3. Nov 23 23:10:49.509224 containerd[1503]: time="2025-11-23T23:10:49.509184562Z" level=info msg="StartContainer for \"2f6bf79974e07943037223c5cb38da2e365f508f47106d6cd090c972d1378fb3\" returns successfully" Nov 23 23:10:53.379951 update_engine[1488]: I20251123 23:10:53.379463 1488 update_attempter.cc:509] Updating boot flags... 
Nov 23 23:10:54.645071 kubelet[2670]: I1123 23:10:54.644838 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vkcxp" podStartSLOduration=6.278763115 podStartE2EDuration="7.644788207s" podCreationTimestamp="2025-11-23 23:10:47 +0000 UTC" firstStartedPulling="2025-11-23 23:10:48.016280049 +0000 UTC m=+7.698384988" lastFinishedPulling="2025-11-23 23:10:49.382305101 +0000 UTC m=+9.064410080" observedRunningTime="2025-11-23 23:10:50.510988465 +0000 UTC m=+10.193093444" watchObservedRunningTime="2025-11-23 23:10:54.644788207 +0000 UTC m=+14.326893226" Nov 23 23:10:55.035167 sudo[1720]: pam_unix(sudo:session): session closed for user root Nov 23 23:10:55.037439 sshd[1719]: Connection closed by 10.0.0.1 port 56034 Nov 23 23:10:55.037929 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Nov 23 23:10:55.043097 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:56034.service: Deactivated successfully. Nov 23 23:10:55.045334 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:10:55.046940 systemd[1]: session-7.scope: Consumed 8.790s CPU time, 220.5M memory peak. Nov 23 23:10:55.051661 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:10:55.054067 systemd-logind[1483]: Removed session 7. Nov 23 23:11:03.551036 systemd[1]: Created slice kubepods-besteffort-pod0d8e29ea_3091_435c_b209_5452c5b06d7f.slice - libcontainer container kubepods-besteffort-pod0d8e29ea_3091_435c_b209_5452c5b06d7f.slice. 
Nov 23 23:11:03.643093 kubelet[2670]: I1123 23:11:03.642885 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d8e29ea-3091-435c-b209-5452c5b06d7f-typha-certs\") pod \"calico-typha-85d794ff5c-vpv7n\" (UID: \"0d8e29ea-3091-435c-b209-5452c5b06d7f\") " pod="calico-system/calico-typha-85d794ff5c-vpv7n" Nov 23 23:11:03.643093 kubelet[2670]: I1123 23:11:03.642955 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvxj\" (UniqueName: \"kubernetes.io/projected/0d8e29ea-3091-435c-b209-5452c5b06d7f-kube-api-access-bsvxj\") pod \"calico-typha-85d794ff5c-vpv7n\" (UID: \"0d8e29ea-3091-435c-b209-5452c5b06d7f\") " pod="calico-system/calico-typha-85d794ff5c-vpv7n" Nov 23 23:11:03.643093 kubelet[2670]: I1123 23:11:03.643030 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8e29ea-3091-435c-b209-5452c5b06d7f-tigera-ca-bundle\") pod \"calico-typha-85d794ff5c-vpv7n\" (UID: \"0d8e29ea-3091-435c-b209-5452c5b06d7f\") " pod="calico-system/calico-typha-85d794ff5c-vpv7n" Nov 23 23:11:03.733981 systemd[1]: Created slice kubepods-besteffort-pod34b8865e_b75a_40ba_9cff_3882aa184c6d.slice - libcontainer container kubepods-besteffort-pod34b8865e_b75a_40ba_9cff_3882aa184c6d.slice. 
Nov 23 23:11:03.743318 kubelet[2670]: I1123 23:11:03.743265 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-var-run-calico\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743318 kubelet[2670]: I1123 23:11:03.743312 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-cni-log-dir\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743318 kubelet[2670]: I1123 23:11:03.743333 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-policysync\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743651 kubelet[2670]: I1123 23:11:03.743349 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34b8865e-b75a-40ba-9cff-3882aa184c6d-tigera-ca-bundle\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743651 kubelet[2670]: I1123 23:11:03.743366 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48kt\" (UniqueName: \"kubernetes.io/projected/34b8865e-b75a-40ba-9cff-3882aa184c6d-kube-api-access-q48kt\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743651 kubelet[2670]: I1123 23:11:03.743425 
2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-flexvol-driver-host\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743651 kubelet[2670]: I1123 23:11:03.743508 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34b8865e-b75a-40ba-9cff-3882aa184c6d-node-certs\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743651 kubelet[2670]: I1123 23:11:03.743577 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-xtables-lock\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743766 kubelet[2670]: I1123 23:11:03.743649 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-cni-bin-dir\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743766 kubelet[2670]: I1123 23:11:03.743666 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-cni-net-dir\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743810 kubelet[2670]: I1123 23:11:03.743763 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-lib-modules\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.743810 kubelet[2670]: I1123 23:11:03.743785 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34b8865e-b75a-40ba-9cff-3882aa184c6d-var-lib-calico\") pod \"calico-node-p89th\" (UID: \"34b8865e-b75a-40ba-9cff-3882aa184c6d\") " pod="calico-system/calico-node-p89th" Nov 23 23:11:03.850047 kubelet[2670]: E1123 23:11:03.850010 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.850047 kubelet[2670]: W1123 23:11:03.850041 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.855119 kubelet[2670]: E1123 23:11:03.855082 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:03.856017 containerd[1503]: time="2025-11-23T23:11:03.855727739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d794ff5c-vpv7n,Uid:0d8e29ea-3091-435c-b209-5452c5b06d7f,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:03.857621 kubelet[2670]: E1123 23:11:03.857570 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.863746 kubelet[2670]: E1123 23:11:03.863719 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.863981 kubelet[2670]: W1123 23:11:03.863866 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.863981 kubelet[2670]: E1123 23:11:03.863894 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.905586 containerd[1503]: time="2025-11-23T23:11:03.905322804Z" level=info msg="connecting to shim dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55" address="unix:///run/containerd/s/01ddc3a5e184c0dab187dc428170383a22f40f9a4bc32ebbdb3f33807ac82ec8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:03.906325 kubelet[2670]: E1123 23:11:03.906241 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:03.933019 kubelet[2670]: E1123 23:11:03.932988 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.933019 kubelet[2670]: W1123 23:11:03.933013 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.933172 kubelet[2670]: E1123 23:11:03.933034 2670 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.933226 kubelet[2670]: E1123 23:11:03.933212 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.935658 kubelet[2670]: W1123 23:11:03.933223 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.935709 kubelet[2670]: E1123 23:11:03.935673 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.936121 kubelet[2670]: E1123 23:11:03.936102 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.936174 kubelet[2670]: W1123 23:11:03.936123 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.936174 kubelet[2670]: E1123 23:11:03.936137 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.936409 kubelet[2670]: E1123 23:11:03.936392 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.936437 kubelet[2670]: W1123 23:11:03.936407 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.936437 kubelet[2670]: E1123 23:11:03.936418 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.936641 kubelet[2670]: E1123 23:11:03.936627 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.936680 kubelet[2670]: W1123 23:11:03.936642 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.936680 kubelet[2670]: E1123 23:11:03.936653 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.936894 kubelet[2670]: E1123 23:11:03.936878 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.936943 kubelet[2670]: W1123 23:11:03.936927 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.936977 kubelet[2670]: E1123 23:11:03.936944 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.937255 kubelet[2670]: E1123 23:11:03.937229 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.937309 kubelet[2670]: W1123 23:11:03.937264 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.937309 kubelet[2670]: E1123 23:11:03.937276 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.937543 kubelet[2670]: E1123 23:11:03.937525 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.937543 kubelet[2670]: W1123 23:11:03.937543 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.937605 kubelet[2670]: E1123 23:11:03.937569 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.937841 kubelet[2670]: E1123 23:11:03.937825 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.937879 kubelet[2670]: W1123 23:11:03.937841 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.937879 kubelet[2670]: E1123 23:11:03.937867 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.938136 kubelet[2670]: E1123 23:11:03.938119 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.938136 kubelet[2670]: W1123 23:11:03.938135 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.938195 kubelet[2670]: E1123 23:11:03.938145 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.938347 kubelet[2670]: E1123 23:11:03.938333 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.938388 kubelet[2670]: W1123 23:11:03.938366 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.938388 kubelet[2670]: E1123 23:11:03.938377 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.939235 kubelet[2670]: E1123 23:11:03.938985 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.939235 kubelet[2670]: W1123 23:11:03.939030 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.939235 kubelet[2670]: E1123 23:11:03.939061 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.939364 kubelet[2670]: E1123 23:11:03.939350 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.939364 kubelet[2670]: W1123 23:11:03.939360 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.939407 kubelet[2670]: E1123 23:11:03.939369 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.939931 kubelet[2670]: E1123 23:11:03.939916 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.939931 kubelet[2670]: W1123 23:11:03.939930 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.940090 kubelet[2670]: E1123 23:11:03.939941 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.940196 kubelet[2670]: E1123 23:11:03.940182 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.940196 kubelet[2670]: W1123 23:11:03.940195 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.940283 kubelet[2670]: E1123 23:11:03.940205 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.940454 kubelet[2670]: E1123 23:11:03.940433 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.940724 kubelet[2670]: W1123 23:11:03.940525 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.940724 kubelet[2670]: E1123 23:11:03.940551 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.940812 kubelet[2670]: E1123 23:11:03.940783 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.940812 kubelet[2670]: W1123 23:11:03.940801 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.940812 kubelet[2670]: E1123 23:11:03.940811 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.941335 systemd[1]: Started cri-containerd-dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55.scope - libcontainer container dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55. 
Nov 23 23:11:03.941647 kubelet[2670]: E1123 23:11:03.941633 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.941647 kubelet[2670]: W1123 23:11:03.941646 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.941734 kubelet[2670]: E1123 23:11:03.941656 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.941903 kubelet[2670]: E1123 23:11:03.941887 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.941941 kubelet[2670]: W1123 23:11:03.941906 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.941941 kubelet[2670]: E1123 23:11:03.941915 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.942154 kubelet[2670]: E1123 23:11:03.942056 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.942154 kubelet[2670]: W1123 23:11:03.942063 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.942154 kubelet[2670]: E1123 23:11:03.942070 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.945564 kubelet[2670]: E1123 23:11:03.945531 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.945880 kubelet[2670]: W1123 23:11:03.945702 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.945880 kubelet[2670]: E1123 23:11:03.945737 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.945880 kubelet[2670]: I1123 23:11:03.945771 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02b80ccd-71ac-4684-b4ef-36bab9efb9cc-socket-dir\") pod \"csi-node-driver-2tmhj\" (UID: \"02b80ccd-71ac-4684-b4ef-36bab9efb9cc\") " pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:03.946104 kubelet[2670]: E1123 23:11:03.946075 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.946213 kubelet[2670]: W1123 23:11:03.946163 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.946362 kubelet[2670]: E1123 23:11:03.946282 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.946362 kubelet[2670]: I1123 23:11:03.946317 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/02b80ccd-71ac-4684-b4ef-36bab9efb9cc-varrun\") pod \"csi-node-driver-2tmhj\" (UID: \"02b80ccd-71ac-4684-b4ef-36bab9efb9cc\") " pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:03.947194 kubelet[2670]: E1123 23:11:03.947175 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.947194 kubelet[2670]: W1123 23:11:03.947193 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.947272 kubelet[2670]: E1123 23:11:03.947206 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.947376 kubelet[2670]: E1123 23:11:03.947363 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.947376 kubelet[2670]: W1123 23:11:03.947373 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.947422 kubelet[2670]: E1123 23:11:03.947381 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.947574 kubelet[2670]: E1123 23:11:03.947560 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.947574 kubelet[2670]: W1123 23:11:03.947571 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.947629 kubelet[2670]: E1123 23:11:03.947579 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.947730 kubelet[2670]: E1123 23:11:03.947717 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.947730 kubelet[2670]: W1123 23:11:03.947727 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.947787 kubelet[2670]: E1123 23:11:03.947735 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.947876 kubelet[2670]: E1123 23:11:03.947865 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.947876 kubelet[2670]: W1123 23:11:03.947874 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.947980 kubelet[2670]: E1123 23:11:03.947882 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.947980 kubelet[2670]: I1123 23:11:03.947918 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b80ccd-71ac-4684-b4ef-36bab9efb9cc-kubelet-dir\") pod \"csi-node-driver-2tmhj\" (UID: \"02b80ccd-71ac-4684-b4ef-36bab9efb9cc\") " pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:03.948104 kubelet[2670]: E1123 23:11:03.948091 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.948104 kubelet[2670]: W1123 23:11:03.948103 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.948321 kubelet[2670]: E1123 23:11:03.948111 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.948321 kubelet[2670]: I1123 23:11:03.948133 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02b80ccd-71ac-4684-b4ef-36bab9efb9cc-registration-dir\") pod \"csi-node-driver-2tmhj\" (UID: \"02b80ccd-71ac-4684-b4ef-36bab9efb9cc\") " pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:03.948453 kubelet[2670]: E1123 23:11:03.948427 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.948510 kubelet[2670]: W1123 23:11:03.948497 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.948575 kubelet[2670]: E1123 23:11:03.948564 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.948820 kubelet[2670]: E1123 23:11:03.948806 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.949080 kubelet[2670]: W1123 23:11:03.948891 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.949080 kubelet[2670]: E1123 23:11:03.948959 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.949241 kubelet[2670]: E1123 23:11:03.949226 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.949294 kubelet[2670]: W1123 23:11:03.949283 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.949348 kubelet[2670]: E1123 23:11:03.949337 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.949415 kubelet[2670]: I1123 23:11:03.949403 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zpkn\" (UniqueName: \"kubernetes.io/projected/02b80ccd-71ac-4684-b4ef-36bab9efb9cc-kube-api-access-2zpkn\") pod \"csi-node-driver-2tmhj\" (UID: \"02b80ccd-71ac-4684-b4ef-36bab9efb9cc\") " pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:03.949681 kubelet[2670]: E1123 23:11:03.949663 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.949681 kubelet[2670]: W1123 23:11:03.949679 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.949749 kubelet[2670]: E1123 23:11:03.949691 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.949850 kubelet[2670]: E1123 23:11:03.949838 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.949850 kubelet[2670]: W1123 23:11:03.949849 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.949943 kubelet[2670]: E1123 23:11:03.949857 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.950046 kubelet[2670]: E1123 23:11:03.950033 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.950046 kubelet[2670]: W1123 23:11:03.950044 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.950094 kubelet[2670]: E1123 23:11:03.950053 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:03.950188 kubelet[2670]: E1123 23:11:03.950177 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:03.950188 kubelet[2670]: W1123 23:11:03.950186 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:03.950240 kubelet[2670]: E1123 23:11:03.950194 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:03.978790 containerd[1503]: time="2025-11-23T23:11:03.978745643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d794ff5c-vpv7n,Uid:0d8e29ea-3091-435c-b209-5452c5b06d7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55\"" Nov 23 23:11:03.982616 kubelet[2670]: E1123 23:11:03.982544 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:03.987077 containerd[1503]: time="2025-11-23T23:11:03.987039908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:11:04.038344 kubelet[2670]: E1123 23:11:04.038282 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:04.039660 containerd[1503]: time="2025-11-23T23:11:04.039618808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p89th,Uid:34b8865e-b75a-40ba-9cff-3882aa184c6d,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:04.050092 kubelet[2670]: E1123 23:11:04.050059 2670 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:04.050092 kubelet[2670]: W1123 23:11:04.050085 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:04.050227 kubelet[2670]: E1123 23:11:04.050116 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:04.050376 kubelet[2670]: E1123 23:11:04.050362 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:04.050376 kubelet[2670]: W1123 23:11:04.050376 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:04.050428 kubelet[2670]: E1123 23:11:04.050392 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:04.050662 kubelet[2670]: E1123 23:11:04.050644 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:04.050693 kubelet[2670]: W1123 23:11:04.050662 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:04.050693 kubelet[2670]: E1123 23:11:04.050676 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 23 23:11:04.072562 kubelet[2670]: E1123 23:11:04.072531 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:11:04.072562 kubelet[2670]: W1123 23:11:04.072554 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:11:04.072756 kubelet[2670]: E1123 23:11:04.072575 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:11:04.076043 containerd[1503]: time="2025-11-23T23:11:04.075790196Z" level=info msg="connecting to shim 35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771" address="unix:///run/containerd/s/8249b8c686454ec5ad815feaddabba6dddae996918dfe74499bc19b9ed23f44a" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:11:04.124139 systemd[1]: Started cri-containerd-35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771.scope - libcontainer container 35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771.
Nov 23 23:11:04.154404 containerd[1503]: time="2025-11-23T23:11:04.154361410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p89th,Uid:34b8865e-b75a-40ba-9cff-3882aa184c6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\""
Nov 23 23:11:04.155313 kubelet[2670]: E1123 23:11:04.155282 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 23 23:11:04.836045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200259545.mount: Deactivated successfully.
Nov 23 23:11:05.404154 containerd[1503]: time="2025-11-23T23:11:05.404093678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:05.404681 containerd[1503]: time="2025-11-23T23:11:05.404639747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 23 23:11:05.405686 containerd[1503]: time="2025-11-23T23:11:05.405645881Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:05.408017 containerd[1503]: time="2025-11-23T23:11:05.407984727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:05.408608 containerd[1503]: time="2025-11-23T23:11:05.408583719Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.421504368s"
Nov 23 23:11:05.408646 containerd[1503]: time="2025-11-23T23:11:05.408615561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 23 23:11:05.409819 containerd[1503]: time="2025-11-23T23:11:05.409777543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 23 23:11:05.420970 containerd[1503]: time="2025-11-23T23:11:05.420397632Z" level=info msg="CreateContainer within sandbox \"dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 23 23:11:05.427851 kubelet[2670]: E1123 23:11:05.427799 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc"
Nov 23 23:11:05.430474 containerd[1503]: time="2025-11-23T23:11:05.430429490Z" level=info msg="Container d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:11:05.455691 containerd[1503]: time="2025-11-23T23:11:05.455620001Z" level=info msg="CreateContainer within sandbox \"dca29a9b00ffa12d5298dba75de45a8ec278b550ee20158abb9d870edd9b8b55\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3\""
Nov 23 23:11:05.456239 containerd[1503]: time="2025-11-23T23:11:05.456207072Z" level=info msg="StartContainer for \"d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3\""
Nov 23 23:11:05.457532 containerd[1503]: time="2025-11-23T23:11:05.457487181Z" level=info msg="connecting to shim d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3" address="unix:///run/containerd/s/01ddc3a5e184c0dab187dc428170383a22f40f9a4bc32ebbdb3f33807ac82ec8" protocol=ttrpc version=3
Nov 23 23:11:05.480111 systemd[1]: Started cri-containerd-d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3.scope - libcontainer container d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3.
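The recurring `dns.go:153` "Nameserver limits exceeded" entries reflect the classic resolv.conf constraint: the resolver honors at most three nameservers (glibc's MAXNS), so when the host supplies more, the kubelet applies only the first three and warns that the rest were omitted, as in the applied line `1.1.1.1 1.0.0.1 8.8.8.8`. A sketch of that truncation, assuming a hypothetical helper `capNameservers` (not the kubelet's actual function):

```go
package main

import "fmt"

// maxNameservers mirrors the resolv.conf limit of 3 nameservers (glibc MAXNS).
const maxNameservers = 3

// capNameservers keeps only the first maxNameservers entries, the same
// truncation that triggers the "Nameserver limits exceeded" warning.
func capNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(capNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```

The warning is cosmetic unless the omitted servers were the only ones that could resolve cluster-external names.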
Nov 23 23:11:05.557671 containerd[1503]: time="2025-11-23T23:11:05.557558146Z" level=info msg="StartContainer for \"d71c353037c8bf7cb1341cad666dcb98470e822891a40a8e43d66bd447af8ea3\" returns successfully"
Nov 23 23:11:06.513652 containerd[1503]: time="2025-11-23T23:11:06.513577800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:06.514131 containerd[1503]: time="2025-11-23T23:11:06.514106546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Nov 23 23:11:06.514970 containerd[1503]: time="2025-11-23T23:11:06.514865705Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:06.516740 containerd[1503]: time="2025-11-23T23:11:06.516668875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:11:06.517352 containerd[1503]: time="2025-11-23T23:11:06.517309107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.107472841s"
Nov 23 23:11:06.517423 containerd[1503]: time="2025-11-23T23:11:06.517354350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 23 23:11:06.522724 containerd[1503]: time="2025-11-23T23:11:06.522592813Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 23 23:11:06.530923 kubelet[2670]: E1123 23:11:06.529711 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 23 23:11:06.536788 containerd[1503]: time="2025-11-23T23:11:06.536739404Z" level=info msg="Container e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:11:06.549873 kubelet[2670]: I1123 23:11:06.549802 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85d794ff5c-vpv7n" podStartSLOduration=2.126308219 podStartE2EDuration="3.549735297s" podCreationTimestamp="2025-11-23 23:11:03 +0000 UTC" firstStartedPulling="2025-11-23 23:11:03.986214338 +0000 UTC m=+23.668319317" lastFinishedPulling="2025-11-23 23:11:05.409641416 +0000 UTC m=+25.091746395" observedRunningTime="2025-11-23 23:11:06.54740978 +0000 UTC m=+26.229514759" watchObservedRunningTime="2025-11-23 23:11:06.549735297 +0000 UTC m=+26.231840276"
Nov 23 23:11:06.553743 containerd[1503]: time="2025-11-23T23:11:06.553619572Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8\""
Nov 23 23:11:06.554204 containerd[1503]: time="2025-11-23T23:11:06.554161080Z" level=info msg="StartContainer for \"e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8\""
Nov 23 23:11:06.556072 containerd[1503]: time="2025-11-23T23:11:06.556039454Z" level=info msg="connecting to shim e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8" address="unix:///run/containerd/s/8249b8c686454ec5ad815feaddabba6dddae996918dfe74499bc19b9ed23f44a" protocol=ttrpc version=3
Nov 23 23:11:06.561738 kubelet[2670]: E1123 23:11:06.561279 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:11:06.561738 kubelet[2670]: W1123 23:11:06.561298 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:11:06.561738 kubelet[2670]: E1123 23:11:06.561319 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:11:06.561738 kubelet[2670]: E1123 23:11:06.561497 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:11:06.561738 kubelet[2670]: W1123 23:11:06.561505 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:11:06.561738 kubelet[2670]: E1123 23:11:06.561547 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.573030 kubelet[2670]: E1123 23:11:06.573018 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.573058 kubelet[2670]: W1123 23:11:06.573047 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.573078 kubelet[2670]: E1123 23:11:06.573058 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.573299 kubelet[2670]: E1123 23:11:06.573287 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.573326 kubelet[2670]: W1123 23:11:06.573299 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.573326 kubelet[2670]: E1123 23:11:06.573308 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.573590 kubelet[2670]: E1123 23:11:06.573565 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.573630 kubelet[2670]: W1123 23:11:06.573590 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.573630 kubelet[2670]: E1123 23:11:06.573604 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.574183 kubelet[2670]: E1123 23:11:06.574165 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.574183 kubelet[2670]: W1123 23:11:06.574182 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.574245 kubelet[2670]: E1123 23:11:06.574194 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.574423 kubelet[2670]: E1123 23:11:06.574411 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.574423 kubelet[2670]: W1123 23:11:06.574423 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.574477 kubelet[2670]: E1123 23:11:06.574433 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.574649 kubelet[2670]: E1123 23:11:06.574634 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.574680 kubelet[2670]: W1123 23:11:06.574648 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.574680 kubelet[2670]: E1123 23:11:06.574659 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.574860 kubelet[2670]: E1123 23:11:06.574837 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.574895 kubelet[2670]: W1123 23:11:06.574861 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.574895 kubelet[2670]: E1123 23:11:06.574872 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.575042 kubelet[2670]: E1123 23:11:06.575023 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.575042 kubelet[2670]: W1123 23:11:06.575035 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.575092 kubelet[2670]: E1123 23:11:06.575044 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.575161 kubelet[2670]: E1123 23:11:06.575150 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.575185 kubelet[2670]: W1123 23:11:06.575161 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.575185 kubelet[2670]: E1123 23:11:06.575170 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.575317 kubelet[2670]: E1123 23:11:06.575305 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.575317 kubelet[2670]: W1123 23:11:06.575316 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.575365 kubelet[2670]: E1123 23:11:06.575325 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.575684 kubelet[2670]: E1123 23:11:06.575665 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.575684 kubelet[2670]: W1123 23:11:06.575679 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.575737 kubelet[2670]: E1123 23:11:06.575690 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.575950 kubelet[2670]: E1123 23:11:06.575935 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.575986 kubelet[2670]: W1123 23:11:06.575950 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.575986 kubelet[2670]: E1123 23:11:06.575963 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.576218 kubelet[2670]: E1123 23:11:06.576201 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.576247 kubelet[2670]: W1123 23:11:06.576218 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.576247 kubelet[2670]: E1123 23:11:06.576232 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.576381 kubelet[2670]: E1123 23:11:06.576362 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.576381 kubelet[2670]: W1123 23:11:06.576373 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.576381 kubelet[2670]: E1123 23:11:06.576381 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.576582 kubelet[2670]: E1123 23:11:06.576549 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.576582 kubelet[2670]: W1123 23:11:06.576557 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.576634 kubelet[2670]: E1123 23:11:06.576588 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.577149 kubelet[2670]: E1123 23:11:06.577127 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.577149 kubelet[2670]: W1123 23:11:06.577145 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.577232 kubelet[2670]: E1123 23:11:06.577158 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:11:06.579010 kubelet[2670]: E1123 23:11:06.578992 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:11:06.579057 kubelet[2670]: W1123 23:11:06.579010 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:11:06.579057 kubelet[2670]: E1123 23:11:06.579030 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:11:06.590122 systemd[1]: Started cri-containerd-e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8.scope - libcontainer container e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8. Nov 23 23:11:06.664247 containerd[1503]: time="2025-11-23T23:11:06.664202010Z" level=info msg="StartContainer for \"e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8\" returns successfully" Nov 23 23:11:06.684003 systemd[1]: cri-containerd-e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8.scope: Deactivated successfully. Nov 23 23:11:06.684491 systemd[1]: cri-containerd-e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8.scope: Consumed 34ms CPU time, 6M memory peak, 4.1M written to disk. 
Nov 23 23:11:06.698544 containerd[1503]: time="2025-11-23T23:11:06.698496934Z" level=info msg="received container exit event container_id:\"e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8\" id:\"e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8\" pid:3394 exited_at:{seconds:1763939466 nanos:693456321}" Nov 23 23:11:06.729184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e918f770357db97a0ae5adc6d5657a6ed1ccfb409a526640b5565043b2ebcfa8-rootfs.mount: Deactivated successfully. Nov 23 23:11:07.428317 kubelet[2670]: E1123 23:11:07.428258 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:07.533791 kubelet[2670]: E1123 23:11:07.533594 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:07.535082 containerd[1503]: time="2025-11-23T23:11:07.535023941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:11:07.538694 kubelet[2670]: I1123 23:11:07.538657 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:11:07.539084 kubelet[2670]: E1123 23:11:07.539067 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:09.428065 kubelet[2670]: E1123 23:11:09.428009 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:10.418164 containerd[1503]: time="2025-11-23T23:11:10.418105878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:10.418855 containerd[1503]: time="2025-11-23T23:11:10.418814506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:11:10.419934 containerd[1503]: time="2025-11-23T23:11:10.419849786Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:10.422120 containerd[1503]: time="2025-11-23T23:11:10.422030871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:10.423011 containerd[1503]: time="2025-11-23T23:11:10.422870303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.88778664s" Nov 23 23:11:10.423011 containerd[1503]: time="2025-11-23T23:11:10.422918945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:11:10.428956 containerd[1503]: time="2025-11-23T23:11:10.428895177Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:11:10.442699 
containerd[1503]: time="2025-11-23T23:11:10.442645311Z" level=info msg="Container 7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:11:10.453351 containerd[1503]: time="2025-11-23T23:11:10.453296365Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083\"" Nov 23 23:11:10.453882 containerd[1503]: time="2025-11-23T23:11:10.453835505Z" level=info msg="StartContainer for \"7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083\"" Nov 23 23:11:10.455835 containerd[1503]: time="2025-11-23T23:11:10.455779341Z" level=info msg="connecting to shim 7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083" address="unix:///run/containerd/s/8249b8c686454ec5ad815feaddabba6dddae996918dfe74499bc19b9ed23f44a" protocol=ttrpc version=3 Nov 23 23:11:10.484112 systemd[1]: Started cri-containerd-7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083.scope - libcontainer container 7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083. Nov 23 23:11:10.586546 containerd[1503]: time="2025-11-23T23:11:10.586474655Z" level=info msg="StartContainer for \"7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083\" returns successfully" Nov 23 23:11:11.222444 systemd[1]: cri-containerd-7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083.scope: Deactivated successfully. Nov 23 23:11:11.224016 systemd[1]: cri-containerd-7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083.scope: Consumed 502ms CPU time, 177.7M memory peak, 2.1M read from disk, 165.9M written to disk. 
Nov 23 23:11:11.226425 containerd[1503]: time="2025-11-23T23:11:11.226377033Z" level=info msg="received container exit event container_id:\"7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083\" id:\"7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083\" pid:3454 exited_at:{seconds:1763939471 nanos:226033740}" Nov 23 23:11:11.256167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7947ac582bb4cb6c4e0bcdbb876b7de05e2b9ff16af88c0734e48ca457254083-rootfs.mount: Deactivated successfully. Nov 23 23:11:11.313883 kubelet[2670]: I1123 23:11:11.313834 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:11:11.387604 systemd[1]: Created slice kubepods-besteffort-pod0acb505e_a17b_4491_947a_c19d317242d7.slice - libcontainer container kubepods-besteffort-pod0acb505e_a17b_4491_947a_c19d317242d7.slice. Nov 23 23:11:11.403281 kubelet[2670]: I1123 23:11:11.403108 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf-calico-apiserver-certs\") pod \"calico-apiserver-8f88d7d4b-sgw76\" (UID: \"ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf\") " pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" Nov 23 23:11:11.403281 kubelet[2670]: I1123 23:11:11.403250 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnjk\" (UniqueName: \"kubernetes.io/projected/5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f-kube-api-access-vlnjk\") pod \"coredns-674b8bbfcf-hs4gj\" (UID: \"5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f\") " pod="kube-system/coredns-674b8bbfcf-hs4gj" Nov 23 23:11:11.403574 kubelet[2670]: I1123 23:11:11.403495 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvd2\" (UniqueName: 
\"kubernetes.io/projected/1224dea9-f06f-4c95-9025-b816274bdaf1-kube-api-access-ssvd2\") pod \"whisker-d6c6f4d9b-6r4v2\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " pod="calico-system/whisker-d6c6f4d9b-6r4v2" Nov 23 23:11:11.403785 kubelet[2670]: I1123 23:11:11.403530 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b221c963-4636-4d56-a9f8-962285b56868-goldmane-ca-bundle\") pod \"goldmane-666569f655-zpgkw\" (UID: \"b221c963-4636-4d56-a9f8-962285b56868\") " pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.403785 kubelet[2670]: I1123 23:11:11.403745 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b221c963-4636-4d56-a9f8-962285b56868-goldmane-key-pair\") pod \"goldmane-666569f655-zpgkw\" (UID: \"b221c963-4636-4d56-a9f8-962285b56868\") " pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.404032 kubelet[2670]: I1123 23:11:11.404015 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-backend-key-pair\") pod \"whisker-d6c6f4d9b-6r4v2\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " pod="calico-system/whisker-d6c6f4d9b-6r4v2" Nov 23 23:11:11.404173 kubelet[2670]: I1123 23:11:11.404106 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw9d8\" (UniqueName: \"kubernetes.io/projected/b221c963-4636-4d56-a9f8-962285b56868-kube-api-access-rw9d8\") pod \"goldmane-666569f655-zpgkw\" (UID: \"b221c963-4636-4d56-a9f8-962285b56868\") " pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.404213 kubelet[2670]: I1123 23:11:11.404160 2670 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmp9l\" (UniqueName: \"kubernetes.io/projected/0acb505e-a17b-4491-947a-c19d317242d7-kube-api-access-lmp9l\") pod \"calico-kube-controllers-5f6945d6f6-zn6lq\" (UID: \"0acb505e-a17b-4491-947a-c19d317242d7\") " pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" Nov 23 23:11:11.404213 kubelet[2670]: I1123 23:11:11.404206 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4t5\" (UniqueName: \"kubernetes.io/projected/7a313c8b-7ee3-4600-9a4c-1ba94b048ba2-kube-api-access-fc4t5\") pod \"coredns-674b8bbfcf-7jgpd\" (UID: \"7a313c8b-7ee3-4600-9a4c-1ba94b048ba2\") " pod="kube-system/coredns-674b8bbfcf-7jgpd" Nov 23 23:11:11.404259 kubelet[2670]: I1123 23:11:11.404231 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0acb505e-a17b-4491-947a-c19d317242d7-tigera-ca-bundle\") pod \"calico-kube-controllers-5f6945d6f6-zn6lq\" (UID: \"0acb505e-a17b-4491-947a-c19d317242d7\") " pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" Nov 23 23:11:11.404259 kubelet[2670]: I1123 23:11:11.404255 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a313c8b-7ee3-4600-9a4c-1ba94b048ba2-config-volume\") pod \"coredns-674b8bbfcf-7jgpd\" (UID: \"7a313c8b-7ee3-4600-9a4c-1ba94b048ba2\") " pod="kube-system/coredns-674b8bbfcf-7jgpd" Nov 23 23:11:11.404304 kubelet[2670]: I1123 23:11:11.404277 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/412b646c-6eab-4135-aded-f9c2d582e297-calico-apiserver-certs\") pod \"calico-apiserver-8f88d7d4b-clgpg\" (UID: \"412b646c-6eab-4135-aded-f9c2d582e297\") " 
pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" Nov 23 23:11:11.404304 kubelet[2670]: I1123 23:11:11.404294 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvq7r\" (UniqueName: \"kubernetes.io/projected/412b646c-6eab-4135-aded-f9c2d582e297-kube-api-access-fvq7r\") pod \"calico-apiserver-8f88d7d4b-clgpg\" (UID: \"412b646c-6eab-4135-aded-f9c2d582e297\") " pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" Nov 23 23:11:11.404349 kubelet[2670]: I1123 23:11:11.404316 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-ca-bundle\") pod \"whisker-d6c6f4d9b-6r4v2\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " pod="calico-system/whisker-d6c6f4d9b-6r4v2" Nov 23 23:11:11.404349 kubelet[2670]: I1123 23:11:11.404338 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b221c963-4636-4d56-a9f8-962285b56868-config\") pod \"goldmane-666569f655-zpgkw\" (UID: \"b221c963-4636-4d56-a9f8-962285b56868\") " pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.404393 kubelet[2670]: I1123 23:11:11.404368 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f-config-volume\") pod \"coredns-674b8bbfcf-hs4gj\" (UID: \"5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f\") " pod="kube-system/coredns-674b8bbfcf-hs4gj" Nov 23 23:11:11.404418 kubelet[2670]: I1123 23:11:11.404393 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkg6\" (UniqueName: \"kubernetes.io/projected/ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf-kube-api-access-nwkg6\") pod 
\"calico-apiserver-8f88d7d4b-sgw76\" (UID: \"ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf\") " pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" Nov 23 23:11:11.410301 systemd[1]: Created slice kubepods-burstable-pod7a313c8b_7ee3_4600_9a4c_1ba94b048ba2.slice - libcontainer container kubepods-burstable-pod7a313c8b_7ee3_4600_9a4c_1ba94b048ba2.slice. Nov 23 23:11:11.421078 systemd[1]: Created slice kubepods-burstable-pod5a2a75ff_afb7_4607_a3ed_9e9e8f13a46f.slice - libcontainer container kubepods-burstable-pod5a2a75ff_afb7_4607_a3ed_9e9e8f13a46f.slice. Nov 23 23:11:11.430011 systemd[1]: Created slice kubepods-besteffort-pod1224dea9_f06f_4c95_9025_b816274bdaf1.slice - libcontainer container kubepods-besteffort-pod1224dea9_f06f_4c95_9025_b816274bdaf1.slice. Nov 23 23:11:11.437047 systemd[1]: Created slice kubepods-besteffort-pod412b646c_6eab_4135_aded_f9c2d582e297.slice - libcontainer container kubepods-besteffort-pod412b646c_6eab_4135_aded_f9c2d582e297.slice. Nov 23 23:11:11.447385 systemd[1]: Created slice kubepods-besteffort-podb221c963_4636_4d56_a9f8_962285b56868.slice - libcontainer container kubepods-besteffort-podb221c963_4636_4d56_a9f8_962285b56868.slice. Nov 23 23:11:11.455564 systemd[1]: Created slice kubepods-besteffort-podec8bcf17_8d1d_4b90_9b92_408df6d5c1bf.slice - libcontainer container kubepods-besteffort-podec8bcf17_8d1d_4b90_9b92_408df6d5c1bf.slice. Nov 23 23:11:11.465354 systemd[1]: Created slice kubepods-besteffort-pod02b80ccd_71ac_4684_b4ef_36bab9efb9cc.slice - libcontainer container kubepods-besteffort-pod02b80ccd_71ac_4684_b4ef_36bab9efb9cc.slice. 
Nov 23 23:11:11.468985 containerd[1503]: time="2025-11-23T23:11:11.468638891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tmhj,Uid:02b80ccd-71ac-4684-b4ef-36bab9efb9cc,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:11.554419 kubelet[2670]: E1123 23:11:11.554022 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:11.555454 containerd[1503]: time="2025-11-23T23:11:11.555296725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:11:11.614665 containerd[1503]: time="2025-11-23T23:11:11.614603884Z" level=error msg="Failed to destroy network for sandbox \"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.615854 containerd[1503]: time="2025-11-23T23:11:11.615801607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tmhj,Uid:02b80ccd-71ac-4684-b4ef-36bab9efb9cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.620668 kubelet[2670]: E1123 23:11:11.620592 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 23 23:11:11.620799 kubelet[2670]: E1123 23:11:11.620704 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:11.620799 kubelet[2670]: E1123 23:11:11.620729 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tmhj" Nov 23 23:11:11.620865 kubelet[2670]: E1123 23:11:11.620819 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"686a13e5e73b05033d4122c08cf9c0c242a1cc0f2f0fdc2d837278a4dcc44458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:11.699536 containerd[1503]: time="2025-11-23T23:11:11.699485973Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5f6945d6f6-zn6lq,Uid:0acb505e-a17b-4491-947a-c19d317242d7,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:11.716041 kubelet[2670]: E1123 23:11:11.715374 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:11.716612 containerd[1503]: time="2025-11-23T23:11:11.716572275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jgpd,Uid:7a313c8b-7ee3-4600-9a4c-1ba94b048ba2,Namespace:kube-system,Attempt:0,}" Nov 23 23:11:11.724488 kubelet[2670]: E1123 23:11:11.724449 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:11.725285 containerd[1503]: time="2025-11-23T23:11:11.725200549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs4gj,Uid:5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f,Namespace:kube-system,Attempt:0,}" Nov 23 23:11:11.737493 containerd[1503]: time="2025-11-23T23:11:11.737442035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c6f4d9b-6r4v2,Uid:1224dea9-f06f-4c95-9025-b816274bdaf1,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:11.747300 containerd[1503]: time="2025-11-23T23:11:11.747255752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-clgpg,Uid:412b646c-6eab-4135-aded-f9c2d582e297,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:11:11.751706 containerd[1503]: time="2025-11-23T23:11:11.751648952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zpgkw,Uid:b221c963-4636-4d56-a9f8-962285b56868,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:11.762596 containerd[1503]: time="2025-11-23T23:11:11.762303780Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-sgw76,Uid:ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:11:11.772998 containerd[1503]: time="2025-11-23T23:11:11.772933807Z" level=error msg="Failed to destroy network for sandbox \"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.784456 containerd[1503]: time="2025-11-23T23:11:11.784385184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6945d6f6-zn6lq,Uid:0acb505e-a17b-4491-947a-c19d317242d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.784697 kubelet[2670]: E1123 23:11:11.784652 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.784794 kubelet[2670]: E1123 23:11:11.784717 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" Nov 23 23:11:11.784794 kubelet[2670]: E1123 23:11:11.784739 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" Nov 23 23:11:11.784849 kubelet[2670]: E1123 23:11:11.784796 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f6945d6f6-zn6lq_calico-system(0acb505e-a17b-4491-947a-c19d317242d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f6945d6f6-zn6lq_calico-system(0acb505e-a17b-4491-947a-c19d317242d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eda0cc8aa95ce615941e8db3ad33b516fbd6096c703ef742e2530aa2c10ca314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 23:11:11.820283 containerd[1503]: time="2025-11-23T23:11:11.820159126Z" level=error msg="Failed to destroy network for sandbox \"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.825888 containerd[1503]: time="2025-11-23T23:11:11.825698967Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-hs4gj,Uid:5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.826085 kubelet[2670]: E1123 23:11:11.826020 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.826164 kubelet[2670]: E1123 23:11:11.826082 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hs4gj" Nov 23 23:11:11.826200 kubelet[2670]: E1123 23:11:11.826168 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hs4gj" Nov 23 23:11:11.826311 kubelet[2670]: E1123 23:11:11.826274 2670 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hs4gj_kube-system(5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hs4gj_kube-system(5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82e54c02088e6fcf9cb5eb29268304c2cdc74b2ab82e5c29421f5848c25d0c30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hs4gj" podUID="5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f" Nov 23 23:11:11.837199 containerd[1503]: time="2025-11-23T23:11:11.837025100Z" level=error msg="Failed to destroy network for sandbox \"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.840002 containerd[1503]: time="2025-11-23T23:11:11.839614274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jgpd,Uid:7a313c8b-7ee3-4600-9a4c-1ba94b048ba2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.840283 kubelet[2670]: E1123 23:11:11.840223 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.840354 kubelet[2670]: E1123 23:11:11.840301 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7jgpd" Nov 23 23:11:11.840354 kubelet[2670]: E1123 23:11:11.840335 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7jgpd" Nov 23 23:11:11.840432 kubelet[2670]: E1123 23:11:11.840386 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7jgpd_kube-system(7a313c8b-7ee3-4600-9a4c-1ba94b048ba2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7jgpd_kube-system(7a313c8b-7ee3-4600-9a4c-1ba94b048ba2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"699d68916d2515718ceb5e956141804325b63b91a2618cdacf3deb2b3a448e09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7jgpd" podUID="7a313c8b-7ee3-4600-9a4c-1ba94b048ba2" Nov 23 23:11:11.858522 containerd[1503]: time="2025-11-23T23:11:11.858361716Z" level=error msg="Failed to destroy 
network for sandbox \"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.860264 containerd[1503]: time="2025-11-23T23:11:11.860223064Z" level=error msg="Failed to destroy network for sandbox \"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.863482 containerd[1503]: time="2025-11-23T23:11:11.863422181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c6f4d9b-6r4v2,Uid:1224dea9-f06f-4c95-9025-b816274bdaf1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.863829 containerd[1503]: time="2025-11-23T23:11:11.863709391Z" level=error msg="Failed to destroy network for sandbox \"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.863919 kubelet[2670]: E1123 23:11:11.863729 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.863919 kubelet[2670]: E1123 23:11:11.863796 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c6f4d9b-6r4v2" Nov 23 23:11:11.863919 kubelet[2670]: E1123 23:11:11.863822 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c6f4d9b-6r4v2" Nov 23 23:11:11.864174 kubelet[2670]: E1123 23:11:11.863872 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d6c6f4d9b-6r4v2_calico-system(1224dea9-f06f-4c95-9025-b816274bdaf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d6c6f4d9b-6r4v2_calico-system(1224dea9-f06f-4c95-9025-b816274bdaf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7e6c719898b5d4b8cd2075a374515ab0747df8e0ba5fe4c782630a86506ee9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d6c6f4d9b-6r4v2" podUID="1224dea9-f06f-4c95-9025-b816274bdaf1" Nov 23 23:11:11.866259 containerd[1503]: time="2025-11-23T23:11:11.866162080Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-sgw76,Uid:ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.866463 kubelet[2670]: E1123 23:11:11.866426 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.866628 kubelet[2670]: E1123 23:11:11.866482 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" Nov 23 23:11:11.866628 kubelet[2670]: E1123 23:11:11.866501 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" Nov 23 23:11:11.866628 kubelet[2670]: E1123 23:11:11.866557 2670 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f88d7d4b-sgw76_calico-apiserver(ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f88d7d4b-sgw76_calico-apiserver(ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3c79376eb949c5353397eb24126214be19eb380de55be660286fb9e58120122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:11.867636 containerd[1503]: time="2025-11-23T23:11:11.867587172Z" level=error msg="Failed to destroy network for sandbox \"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.868375 containerd[1503]: time="2025-11-23T23:11:11.868288638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-clgpg,Uid:412b646c-6eab-4135-aded-f9c2d582e297,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.868782 kubelet[2670]: E1123 23:11:11.868739 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.868838 kubelet[2670]: E1123 23:11:11.868800 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" Nov 23 23:11:11.868865 kubelet[2670]: E1123 23:11:11.868821 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" Nov 23 23:11:11.868917 kubelet[2670]: E1123 23:11:11.868879 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f88d7d4b-clgpg_calico-apiserver(412b646c-6eab-4135-aded-f9c2d582e297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f88d7d4b-clgpg_calico-apiserver(412b646c-6eab-4135-aded-f9c2d582e297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"235718ed30dbf04f2e7d5a2f6b8f2225c9678cf8dca2b3b6357ec71cf610dfb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:11.869630 containerd[1503]: time="2025-11-23T23:11:11.869589685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zpgkw,Uid:b221c963-4636-4d56-a9f8-962285b56868,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.869860 kubelet[2670]: E1123 23:11:11.869822 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:11:11.869930 kubelet[2670]: E1123 23:11:11.869882 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.870086 kubelet[2670]: E1123 23:11:11.870057 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zpgkw" Nov 23 23:11:11.870156 kubelet[2670]: E1123 23:11:11.870131 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zpgkw_calico-system(b221c963-4636-4d56-a9f8-962285b56868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zpgkw_calico-system(b221c963-4636-4d56-a9f8-962285b56868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ada9436ee2d6464c43ae9cdd7988bbd5270e184e6d2e432e7dcc65bd85436f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:12.477781 systemd[1]: run-netns-cni\x2d82b6a940\x2d6e4b\x2d6826\x2d0b49\x2d6aed4654a8a3.mount: Deactivated successfully. Nov 23 23:11:14.766237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377403684.mount: Deactivated successfully. 
Nov 23 23:11:15.090663 containerd[1503]: time="2025-11-23T23:11:15.090617958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:15.091709 containerd[1503]: time="2025-11-23T23:11:15.091502344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:11:15.092675 containerd[1503]: time="2025-11-23T23:11:15.092634058Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:15.095969 containerd[1503]: time="2025-11-23T23:11:15.095927878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:11:15.096809 containerd[1503]: time="2025-11-23T23:11:15.096773903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.541422736s" Nov 23 23:11:15.096940 containerd[1503]: time="2025-11-23T23:11:15.096920347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:11:15.116527 containerd[1503]: time="2025-11-23T23:11:15.116481057Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:11:15.127128 containerd[1503]: time="2025-11-23T23:11:15.127055135Z" level=info msg="Container 
fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:11:15.141967 containerd[1503]: time="2025-11-23T23:11:15.141888382Z" level=info msg="CreateContainer within sandbox \"35b3cc08cdeaa25401994eeaa80ef5ddd616c3e76de607e91cb618e41d414771\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af\"" Nov 23 23:11:15.142447 containerd[1503]: time="2025-11-23T23:11:15.142409838Z" level=info msg="StartContainer for \"fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af\"" Nov 23 23:11:15.144122 containerd[1503]: time="2025-11-23T23:11:15.144090528Z" level=info msg="connecting to shim fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af" address="unix:///run/containerd/s/8249b8c686454ec5ad815feaddabba6dddae996918dfe74499bc19b9ed23f44a" protocol=ttrpc version=3 Nov 23 23:11:15.178185 systemd[1]: Started cri-containerd-fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af.scope - libcontainer container fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af. Nov 23 23:11:15.312971 containerd[1503]: time="2025-11-23T23:11:15.312924214Z" level=info msg="StartContainer for \"fb9c8de8408e4abc1d1a1978c96988bb9f83b519cc5007b1b26a5d7f182458af\" returns successfully" Nov 23 23:11:15.453159 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:11:15.453290 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 23 23:11:15.585924 kubelet[2670]: E1123 23:11:15.585573 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:15.616333 kubelet[2670]: I1123 23:11:15.613522 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p89th" podStartSLOduration=1.6711554579999999 podStartE2EDuration="12.613502229s" podCreationTimestamp="2025-11-23 23:11:03 +0000 UTC" firstStartedPulling="2025-11-23 23:11:04.155832974 +0000 UTC m=+23.837937953" lastFinishedPulling="2025-11-23 23:11:15.098179785 +0000 UTC m=+34.780284724" observedRunningTime="2025-11-23 23:11:15.610362134 +0000 UTC m=+35.292467113" watchObservedRunningTime="2025-11-23 23:11:15.613502229 +0000 UTC m=+35.295607208" Nov 23 23:11:15.734416 kubelet[2670]: I1123 23:11:15.734121 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-ca-bundle\") pod \"1224dea9-f06f-4c95-9025-b816274bdaf1\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " Nov 23 23:11:15.734416 kubelet[2670]: I1123 23:11:15.734174 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssvd2\" (UniqueName: \"kubernetes.io/projected/1224dea9-f06f-4c95-9025-b816274bdaf1-kube-api-access-ssvd2\") pod \"1224dea9-f06f-4c95-9025-b816274bdaf1\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " Nov 23 23:11:15.734416 kubelet[2670]: I1123 23:11:15.734199 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-backend-key-pair\") pod \"1224dea9-f06f-4c95-9025-b816274bdaf1\" (UID: \"1224dea9-f06f-4c95-9025-b816274bdaf1\") " Nov 23 23:11:15.748173 kubelet[2670]: I1123 
23:11:15.748114 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1224dea9-f06f-4c95-9025-b816274bdaf1-kube-api-access-ssvd2" (OuterVolumeSpecName: "kube-api-access-ssvd2") pod "1224dea9-f06f-4c95-9025-b816274bdaf1" (UID: "1224dea9-f06f-4c95-9025-b816274bdaf1"). InnerVolumeSpecName "kube-api-access-ssvd2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:11:15.748326 kubelet[2670]: I1123 23:11:15.748269 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1224dea9-f06f-4c95-9025-b816274bdaf1" (UID: "1224dea9-f06f-4c95-9025-b816274bdaf1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:11:15.748618 kubelet[2670]: I1123 23:11:15.748574 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1224dea9-f06f-4c95-9025-b816274bdaf1" (UID: "1224dea9-f06f-4c95-9025-b816274bdaf1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:11:15.767719 systemd[1]: var-lib-kubelet-pods-1224dea9\x2df06f\x2d4c95\x2d9025\x2db816274bdaf1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssvd2.mount: Deactivated successfully. Nov 23 23:11:15.767827 systemd[1]: var-lib-kubelet-pods-1224dea9\x2df06f\x2d4c95\x2d9025\x2db816274bdaf1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 23 23:11:15.835268 kubelet[2670]: I1123 23:11:15.835204 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssvd2\" (UniqueName: \"kubernetes.io/projected/1224dea9-f06f-4c95-9025-b816274bdaf1-kube-api-access-ssvd2\") on node \"localhost\" DevicePath \"\"" Nov 23 23:11:15.835268 kubelet[2670]: I1123 23:11:15.835243 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 23 23:11:15.835268 kubelet[2670]: I1123 23:11:15.835253 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1224dea9-f06f-4c95-9025-b816274bdaf1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 23 23:11:16.440226 systemd[1]: Removed slice kubepods-besteffort-pod1224dea9_f06f_4c95_9025_b816274bdaf1.slice - libcontainer container kubepods-besteffort-pod1224dea9_f06f_4c95_9025_b816274bdaf1.slice. Nov 23 23:11:16.587439 kubelet[2670]: E1123 23:11:16.587396 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:16.718100 systemd[1]: Created slice kubepods-besteffort-pod03841c62_c516_4f22_ae1c_acb3dc1c42a5.slice - libcontainer container kubepods-besteffort-pod03841c62_c516_4f22_ae1c_acb3dc1c42a5.slice. 
Nov 23 23:11:16.741725 kubelet[2670]: I1123 23:11:16.741649 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03841c62-c516-4f22-ae1c-acb3dc1c42a5-whisker-ca-bundle\") pod \"whisker-8464bc68f8-w7h5q\" (UID: \"03841c62-c516-4f22-ae1c-acb3dc1c42a5\") " pod="calico-system/whisker-8464bc68f8-w7h5q" Nov 23 23:11:16.741725 kubelet[2670]: I1123 23:11:16.741696 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rttwd\" (UniqueName: \"kubernetes.io/projected/03841c62-c516-4f22-ae1c-acb3dc1c42a5-kube-api-access-rttwd\") pod \"whisker-8464bc68f8-w7h5q\" (UID: \"03841c62-c516-4f22-ae1c-acb3dc1c42a5\") " pod="calico-system/whisker-8464bc68f8-w7h5q" Nov 23 23:11:16.741966 kubelet[2670]: I1123 23:11:16.741777 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03841c62-c516-4f22-ae1c-acb3dc1c42a5-whisker-backend-key-pair\") pod \"whisker-8464bc68f8-w7h5q\" (UID: \"03841c62-c516-4f22-ae1c-acb3dc1c42a5\") " pod="calico-system/whisker-8464bc68f8-w7h5q" Nov 23 23:11:17.022539 containerd[1503]: time="2025-11-23T23:11:17.022234200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8464bc68f8-w7h5q,Uid:03841c62-c516-4f22-ae1c-acb3dc1c42a5,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:17.247185 systemd-networkd[1438]: calic5ad0d0375d: Link UP Nov 23 23:11:17.248109 systemd-networkd[1438]: calic5ad0d0375d: Gained carrier Nov 23 23:11:17.265965 containerd[1503]: 2025-11-23 23:11:17.067 [INFO][3982] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:17.265965 containerd[1503]: 2025-11-23 23:11:17.103 [INFO][3982] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--8464bc68f8--w7h5q-eth0 whisker-8464bc68f8- calico-system 03841c62-c516-4f22-ae1c-acb3dc1c42a5 886 0 2025-11-23 23:11:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8464bc68f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8464bc68f8-w7h5q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic5ad0d0375d [] [] }} ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-" Nov 23 23:11:17.265965 containerd[1503]: 2025-11-23 23:11:17.103 [INFO][3982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.265965 containerd[1503]: 2025-11-23 23:11:17.195 [INFO][3997] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" HandleID="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Workload="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.195 [INFO][3997] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" HandleID="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Workload="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8464bc68f8-w7h5q", "timestamp":"2025-11-23 23:11:17.195529737 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.195 [INFO][3997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.195 [INFO][3997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.195 [INFO][3997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.207 [INFO][3997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" host="localhost" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.214 [INFO][3997] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.219 [INFO][3997] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.221 [INFO][3997] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.224 [INFO][3997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:17.266241 containerd[1503]: 2025-11-23 23:11:17.224 [INFO][3997] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" host="localhost" Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.225 [INFO][3997] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9 Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.230 [INFO][3997] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" host="localhost" Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.237 [INFO][3997] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" host="localhost" Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.237 [INFO][3997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" host="localhost" Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.237 [INFO][3997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:11:17.266449 containerd[1503]: 2025-11-23 23:11:17.237 [INFO][3997] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" HandleID="k8s-pod-network.37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Workload="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.267024 containerd[1503]: 2025-11-23 23:11:17.240 [INFO][3982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8464bc68f8--w7h5q-eth0", GenerateName:"whisker-8464bc68f8-", Namespace:"calico-system", SelfLink:"", UID:"03841c62-c516-4f22-ae1c-acb3dc1c42a5", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8464bc68f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8464bc68f8-w7h5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5ad0d0375d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:17.267024 containerd[1503]: 2025-11-23 23:11:17.240 [INFO][3982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.267110 containerd[1503]: 2025-11-23 23:11:17.240 [INFO][3982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5ad0d0375d ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.267110 containerd[1503]: 2025-11-23 23:11:17.248 [INFO][3982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.267152 containerd[1503]: 2025-11-23 23:11:17.250 [INFO][3982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8464bc68f8--w7h5q-eth0", GenerateName:"whisker-8464bc68f8-", Namespace:"calico-system", SelfLink:"", UID:"03841c62-c516-4f22-ae1c-acb3dc1c42a5", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 16, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8464bc68f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9", Pod:"whisker-8464bc68f8-w7h5q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5ad0d0375d", MAC:"da:27:ec:0d:42:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:17.267199 containerd[1503]: 2025-11-23 23:11:17.260 [INFO][3982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" Namespace="calico-system" Pod="whisker-8464bc68f8-w7h5q" WorkloadEndpoint="localhost-k8s-whisker--8464bc68f8--w7h5q-eth0" Nov 23 23:11:17.329484 containerd[1503]: time="2025-11-23T23:11:17.329356990Z" level=info msg="connecting to shim 37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9" address="unix:///run/containerd/s/d179888d7435607105f80c750d0ddf3fded463c1c94b43f251738f7fe2470a0e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:17.358100 systemd[1]: Started cri-containerd-37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9.scope - libcontainer container 37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9. 
Nov 23 23:11:17.369929 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:17.390124 containerd[1503]: time="2025-11-23T23:11:17.390075480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8464bc68f8-w7h5q,Uid:03841c62-c516-4f22-ae1c-acb3dc1c42a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"37da522b27e9689b2caba9d49e4925bae08698b2624ff959d1fe9fc4a98866a9\"" Nov 23 23:11:17.391681 containerd[1503]: time="2025-11-23T23:11:17.391490000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:11:17.596519 containerd[1503]: time="2025-11-23T23:11:17.596455319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:17.598093 containerd[1503]: time="2025-11-23T23:11:17.597951042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:11:17.598093 containerd[1503]: time="2025-11-23T23:11:17.597957162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:11:17.606399 kubelet[2670]: E1123 23:11:17.606279 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:17.608306 kubelet[2670]: E1123 23:11:17.608252 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:17.616833 kubelet[2670]: E1123 23:11:17.616727 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9542fe333b649a49bebbed2ee2383fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:17.619930 containerd[1503]: time="2025-11-23T23:11:17.619381172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:11:17.821891 containerd[1503]: time="2025-11-23T23:11:17.821692936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:17.822910 containerd[1503]: time="2025-11-23T23:11:17.822750046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:11:17.822910 containerd[1503]: time="2025-11-23T23:11:17.822815208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:11:17.823255 kubelet[2670]: E1123 23:11:17.823207 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:17.823324 kubelet[2670]: E1123 23:11:17.823260 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:17.823478 kubelet[2670]: E1123 23:11:17.823378 2670 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:17.825033 kubelet[2670]: E1123 23:11:17.824956 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8464bc68f8-w7h5q" podUID="03841c62-c516-4f22-ae1c-acb3dc1c42a5" Nov 23 23:11:18.430389 kubelet[2670]: I1123 23:11:18.430204 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1224dea9-f06f-4c95-9025-b816274bdaf1" path="/var/lib/kubelet/pods/1224dea9-f06f-4c95-9025-b816274bdaf1/volumes" Nov 23 23:11:18.596481 kubelet[2670]: E1123 23:11:18.596434 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8464bc68f8-w7h5q" podUID="03841c62-c516-4f22-ae1c-acb3dc1c42a5" Nov 23 23:11:18.703104 systemd-networkd[1438]: calic5ad0d0375d: Gained IPv6LL Nov 23 23:11:23.108868 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:34790.service - OpenSSH per-connection server daemon (10.0.0.1:34790). Nov 23 23:11:23.180998 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 34790 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:23.182642 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:23.187512 systemd-logind[1483]: New session 8 of user core. Nov 23 23:11:23.197158 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:11:23.354406 sshd[4192]: Connection closed by 10.0.0.1 port 34790 Nov 23 23:11:23.354761 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:23.358571 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:34790.service: Deactivated successfully. Nov 23 23:11:23.361549 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:11:23.362790 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:11:23.364645 systemd-logind[1483]: Removed session 8. 
Nov 23 23:11:23.429083 containerd[1503]: time="2025-11-23T23:11:23.429028492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tmhj,Uid:02b80ccd-71ac-4684-b4ef-36bab9efb9cc,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:23.560056 systemd-networkd[1438]: cali774ec094667: Link UP Nov 23 23:11:23.561086 systemd-networkd[1438]: cali774ec094667: Gained carrier Nov 23 23:11:23.581104 containerd[1503]: 2025-11-23 23:11:23.455 [INFO][4207] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:23.581104 containerd[1503]: 2025-11-23 23:11:23.475 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2tmhj-eth0 csi-node-driver- calico-system 02b80ccd-71ac-4684-b4ef-36bab9efb9cc 722 0 2025-11-23 23:11:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2tmhj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali774ec094667 [] [] }} ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-" Nov 23 23:11:23.581104 containerd[1503]: 2025-11-23 23:11:23.475 [INFO][4207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581104 containerd[1503]: 2025-11-23 23:11:23.508 [INFO][4223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" HandleID="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Workload="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.508 [INFO][4223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" HandleID="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Workload="localhost-k8s-csi--node--driver--2tmhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2tmhj", "timestamp":"2025-11-23 23:11:23.508055881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.508 [INFO][4223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.508 [INFO][4223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.508 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.521 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" host="localhost" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.525 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.530 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.533 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.536 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:23.581368 containerd[1503]: 2025-11-23 23:11:23.537 [INFO][4223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" host="localhost" Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.540 [INFO][4223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.547 [INFO][4223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" host="localhost" Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.553 [INFO][4223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" host="localhost" Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.553 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" host="localhost" Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.553 [INFO][4223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:23.581617 containerd[1503]: 2025-11-23 23:11:23.553 [INFO][4223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" HandleID="k8s-pod-network.0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Workload="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581729 containerd[1503]: 2025-11-23 23:11:23.555 [INFO][4207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2tmhj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b80ccd-71ac-4684-b4ef-36bab9efb9cc", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2tmhj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali774ec094667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:23.581785 containerd[1503]: 2025-11-23 23:11:23.555 [INFO][4207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581785 containerd[1503]: 2025-11-23 23:11:23.555 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali774ec094667 ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581785 containerd[1503]: 2025-11-23 23:11:23.560 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.581843 containerd[1503]: 2025-11-23 23:11:23.560 [INFO][4207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" 
Namespace="calico-system" Pod="csi-node-driver-2tmhj" WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2tmhj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02b80ccd-71ac-4684-b4ef-36bab9efb9cc", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d", Pod:"csi-node-driver-2tmhj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali774ec094667", MAC:"ca:9b:1f:e0:f0:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:23.581894 containerd[1503]: 2025-11-23 23:11:23.578 [INFO][4207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" Namespace="calico-system" Pod="csi-node-driver-2tmhj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2tmhj-eth0" Nov 23 23:11:23.613597 containerd[1503]: time="2025-11-23T23:11:23.613267823Z" level=info msg="connecting to shim 0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d" address="unix:///run/containerd/s/118aa44cbe97f455631373167c2ae200dbb51f7c64230838bed311e09aef5410" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:23.644181 systemd[1]: Started cri-containerd-0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d.scope - libcontainer container 0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d. Nov 23 23:11:23.662996 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:23.688775 containerd[1503]: time="2025-11-23T23:11:23.688650965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tmhj,Uid:02b80ccd-71ac-4684-b4ef-36bab9efb9cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ed756ef18bfa69db6b06d981b095139b8e9db0ae6b1623ac1119f249b4a0d7d\"" Nov 23 23:11:23.690922 containerd[1503]: time="2025-11-23T23:11:23.690844218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:11:23.902719 containerd[1503]: time="2025-11-23T23:11:23.902586734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:23.904299 containerd[1503]: time="2025-11-23T23:11:23.904235134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:11:23.904374 containerd[1503]: time="2025-11-23T23:11:23.904329896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:11:23.904557 kubelet[2670]: E1123 23:11:23.904501 2670 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:11:23.904875 kubelet[2670]: E1123 23:11:23.904572 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:11:23.904875 kubelet[2670]: E1123 23:11:23.904745 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zpkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:23.907463 containerd[1503]: time="2025-11-23T23:11:23.907245486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:11:24.122465 containerd[1503]: time="2025-11-23T23:11:24.122397847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:24.123357 containerd[1503]: time="2025-11-23T23:11:24.123319788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:11:24.123757 containerd[1503]: time="2025-11-23T23:11:24.123357069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:11:24.123812 kubelet[2670]: 
E1123 23:11:24.123536 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:11:24.123812 kubelet[2670]: E1123 23:11:24.123589 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:11:24.123937 kubelet[2670]: E1123 23:11:24.123714 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zpkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:24.125985 kubelet[2670]: E1123 23:11:24.125932 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:24.428585 kubelet[2670]: E1123 23:11:24.428519 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:24.429175 containerd[1503]: time="2025-11-23T23:11:24.429053258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jgpd,Uid:7a313c8b-7ee3-4600-9a4c-1ba94b048ba2,Namespace:kube-system,Attempt:0,}" Nov 23 23:11:24.429655 containerd[1503]: time="2025-11-23T23:11:24.429557670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zpgkw,Uid:b221c963-4636-4d56-a9f8-962285b56868,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:24.429700 kubelet[2670]: E1123 23:11:24.429261 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 
23:11:24.429734 containerd[1503]: time="2025-11-23T23:11:24.429680393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs4gj,Uid:5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f,Namespace:kube-system,Attempt:0,}" Nov 23 23:11:24.605758 systemd-networkd[1438]: cali1f8050e3d07: Link UP Nov 23 23:11:24.606031 systemd-networkd[1438]: cali1f8050e3d07: Gained carrier Nov 23 23:11:24.622710 kubelet[2670]: E1123 23:11:24.622370 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:24.625002 containerd[1503]: 2025-11-23 23:11:24.481 [INFO][4312] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:24.625002 containerd[1503]: 2025-11-23 23:11:24.522 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--zpgkw-eth0 goldmane-666569f655- calico-system b221c963-4636-4d56-a9f8-962285b56868 822 0 2025-11-23 23:11:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-zpgkw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f8050e3d07 [] [] }} ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-" Nov 23 23:11:24.625002 containerd[1503]: 2025-11-23 23:11:24.523 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.625002 containerd[1503]: 2025-11-23 23:11:24.556 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" HandleID="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Workload="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.556 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" HandleID="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Workload="localhost-k8s-goldmane--666569f655--zpgkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-zpgkw", "timestamp":"2025-11-23 23:11:24.556164808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.556 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.556 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.556 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.566 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" host="localhost" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.572 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.576 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.579 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.581 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.625824 containerd[1503]: 2025-11-23 23:11:24.581 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" host="localhost" Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.583 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.592 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" host="localhost" Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.598 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" host="localhost" Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.598 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" host="localhost" Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.598 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:24.628174 containerd[1503]: 2025-11-23 23:11:24.599 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" HandleID="k8s-pod-network.ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Workload="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.628294 containerd[1503]: 2025-11-23 23:11:24.603 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--zpgkw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b221c963-4636-4d56-a9f8-962285b56868", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-zpgkw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f8050e3d07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.628294 containerd[1503]: 2025-11-23 23:11:24.604 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.628376 containerd[1503]: 2025-11-23 23:11:24.604 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8050e3d07 ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.628376 containerd[1503]: 2025-11-23 23:11:24.606 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.628419 containerd[1503]: 2025-11-23 23:11:24.606 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--zpgkw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b221c963-4636-4d56-a9f8-962285b56868", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb", Pod:"goldmane-666569f655-zpgkw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f8050e3d07", MAC:"12:b0:81:22:46:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.628466 containerd[1503]: 2025-11-23 23:11:24.619 [INFO][4312] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" Namespace="calico-system" Pod="goldmane-666569f655-zpgkw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--zpgkw-eth0" Nov 23 23:11:24.655407 containerd[1503]: time="2025-11-23T23:11:24.655346660Z" level=info msg="connecting to shim ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb" address="unix:///run/containerd/s/1838f78f7e0b5b462f06d7eaddd2b7e01d263ec5bd88c3df5790187a0813c83f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:24.704181 systemd[1]: Started cri-containerd-ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb.scope - libcontainer container ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb. Nov 23 23:11:24.735093 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:24.771382 systemd-networkd[1438]: calif86b817204a: Link UP Nov 23 23:11:24.772264 systemd-networkd[1438]: calif86b817204a: Gained carrier Nov 23 23:11:24.784859 containerd[1503]: time="2025-11-23T23:11:24.784793104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zpgkw,Uid:b221c963-4636-4d56-a9f8-962285b56868,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee064609fe2c594ae364de28389f4fc946a6f9aaee16533698f82729524ef8eb\"" Nov 23 23:11:24.788925 containerd[1503]: time="2025-11-23T23:11:24.788796079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:11:24.798842 containerd[1503]: 2025-11-23 23:11:24.492 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:24.798842 containerd[1503]: 2025-11-23 23:11:24.523 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0 coredns-674b8bbfcf- kube-system 
7a313c8b-7ee3-4600-9a4c-1ba94b048ba2 819 0 2025-11-23 23:10:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7jgpd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif86b817204a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-" Nov 23 23:11:24.798842 containerd[1503]: 2025-11-23 23:11:24.523 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.798842 containerd[1503]: 2025-11-23 23:11:24.561 [INFO][4355] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" HandleID="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Workload="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.561 [INFO][4355] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" HandleID="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Workload="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7jgpd", "timestamp":"2025-11-23 23:11:24.561630496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.561 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.598 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.599 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.676 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" host="localhost" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.689 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.696 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.701 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.709 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.799327 containerd[1503]: 2025-11-23 23:11:24.709 [INFO][4355] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" host="localhost" Nov 23 23:11:24.799538 containerd[1503]: 2025-11-23 23:11:24.713 [INFO][4355] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab Nov 23 23:11:24.799538 
containerd[1503]: 2025-11-23 23:11:24.730 [INFO][4355] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" host="localhost" Nov 23 23:11:24.799538 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4355] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" host="localhost" Nov 23 23:11:24.799538 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" host="localhost" Nov 23 23:11:24.799538 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:24.799538 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4355] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" HandleID="k8s-pod-network.c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Workload="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.799652 containerd[1503]: 2025-11-23 23:11:24.756 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a313c8b-7ee3-4600-9a4c-1ba94b048ba2", ResourceVersion:"819", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7jgpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86b817204a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.799709 containerd[1503]: 2025-11-23 23:11:24.756 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.799709 containerd[1503]: 2025-11-23 23:11:24.756 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif86b817204a 
ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.799709 containerd[1503]: 2025-11-23 23:11:24.773 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.799766 containerd[1503]: 2025-11-23 23:11:24.773 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a313c8b-7ee3-4600-9a4c-1ba94b048ba2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab", Pod:"coredns-674b8bbfcf-7jgpd", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86b817204a", MAC:"3a:d8:09:fd:40:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.799766 containerd[1503]: 2025-11-23 23:11:24.790 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-7jgpd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7jgpd-eth0" Nov 23 23:11:24.842922 containerd[1503]: time="2025-11-23T23:11:24.842610784Z" level=info msg="connecting to shim c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab" address="unix:///run/containerd/s/b015ade16c7d6e9a9ec4e8692ce748cc5edb7792e5e8a27d27fd35e11efc33c6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:24.855282 systemd-networkd[1438]: cali107d2d7d4a7: Link UP Nov 23 23:11:24.856151 systemd-networkd[1438]: cali107d2d7d4a7: Gained carrier Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.491 [INFO][4321] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.523 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0 coredns-674b8bbfcf- kube-system 
5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f 817 0 2025-11-23 23:10:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hs4gj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali107d2d7d4a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.523 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.565 [INFO][4357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" HandleID="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Workload="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.566 [INFO][4357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" HandleID="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Workload="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000365910), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hs4gj", "timestamp":"2025-11-23 23:11:24.565578909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.566 [INFO][4357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.751 [INFO][4357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.772 [INFO][4357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.787 [INFO][4357] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.808 [INFO][4357] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.812 [INFO][4357] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.820 [INFO][4357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.820 [INFO][4357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.823 [INFO][4357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c Nov 23 23:11:24.881062 
containerd[1503]: 2025-11-23 23:11:24.829 [INFO][4357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.839 [INFO][4357] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.839 [INFO][4357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" host="localhost" Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.839 [INFO][4357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:24.881062 containerd[1503]: 2025-11-23 23:11:24.840 [INFO][4357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" HandleID="k8s-pod-network.5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Workload="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.847 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f", ResourceVersion:"817", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hs4gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107d2d7d4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.848 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.848 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali107d2d7d4a7 
ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.856 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.857 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c", Pod:"coredns-674b8bbfcf-hs4gj", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107d2d7d4a7", MAC:"6a:94:90:24:9b:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:24.881767 containerd[1503]: 2025-11-23 23:11:24.878 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" Namespace="kube-system" Pod="coredns-674b8bbfcf-hs4gj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hs4gj-eth0" Nov 23 23:11:24.882163 systemd[1]: Started cri-containerd-c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab.scope - libcontainer container c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab. 
Nov 23 23:11:24.894397 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:24.931721 containerd[1503]: time="2025-11-23T23:11:24.931677439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7jgpd,Uid:7a313c8b-7ee3-4600-9a4c-1ba94b048ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab\"" Nov 23 23:11:24.932514 kubelet[2670]: E1123 23:11:24.932489 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:24.948196 containerd[1503]: time="2025-11-23T23:11:24.948104065Z" level=info msg="CreateContainer within sandbox \"c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:11:24.950585 containerd[1503]: time="2025-11-23T23:11:24.950537042Z" level=info msg="connecting to shim 5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c" address="unix:///run/containerd/s/e43fedc230c50df557155d5adc13677b24d04407a5b3d1484ad4661970ecb71b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:24.963961 containerd[1503]: time="2025-11-23T23:11:24.963592989Z" level=info msg="Container 71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:11:24.975349 containerd[1503]: time="2025-11-23T23:11:24.975287104Z" level=info msg="CreateContainer within sandbox \"c74991a63e67550a6736d7f87d8aafbc74d087989985944cd582ba539b7da3ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802\"" Nov 23 23:11:24.976127 containerd[1503]: time="2025-11-23T23:11:24.976095843Z" level=info msg="StartContainer for 
\"71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802\"" Nov 23 23:11:24.978475 containerd[1503]: time="2025-11-23T23:11:24.978423658Z" level=info msg="connecting to shim 71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802" address="unix:///run/containerd/s/b015ade16c7d6e9a9ec4e8692ce748cc5edb7792e5e8a27d27fd35e11efc33c6" protocol=ttrpc version=3 Nov 23 23:11:24.986090 containerd[1503]: time="2025-11-23T23:11:24.986032237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:24.987374 containerd[1503]: time="2025-11-23T23:11:24.987321027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:11:24.987458 containerd[1503]: time="2025-11-23T23:11:24.987445990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:24.987703 kubelet[2670]: E1123 23:11:24.987633 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:11:24.987703 kubelet[2670]: E1123 23:11:24.987697 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:11:24.987921 kubelet[2670]: E1123 23:11:24.987847 2670 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rw9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zpgkw_calico-system(b221c963-4636-4d56-a9f8-962285b56868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:24.988186 systemd[1]: Started cri-containerd-5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c.scope - libcontainer container 5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c. 
Nov 23 23:11:24.990029 kubelet[2670]: E1123 23:11:24.989973 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:24.996057 systemd[1]: Started cri-containerd-71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802.scope - libcontainer container 71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802. Nov 23 23:11:25.005952 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:25.036136 containerd[1503]: time="2025-11-23T23:11:25.036012910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hs4gj,Uid:5a2a75ff-afb7-4607-a3ed-9e9e8f13a46f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c\"" Nov 23 23:11:25.036470 containerd[1503]: time="2025-11-23T23:11:25.036115833Z" level=info msg="StartContainer for \"71eaeb33ea565bd42c067f60ab3846902209a23fd8f8f45e0ee063c9d3627802\" returns successfully" Nov 23 23:11:25.038961 kubelet[2670]: E1123 23:11:25.038606 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:25.044310 containerd[1503]: time="2025-11-23T23:11:25.044266979Z" level=info msg="CreateContainer within sandbox \"5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:11:25.064620 containerd[1503]: time="2025-11-23T23:11:25.064556324Z" 
level=info msg="Container 1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:11:25.074461 containerd[1503]: time="2025-11-23T23:11:25.074325508Z" level=info msg="CreateContainer within sandbox \"5ad9500456cc3a0628dd1c1ab301e4b3c48bf8bcb91fcbf67b6e7e71613b5e4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147\"" Nov 23 23:11:25.075103 containerd[1503]: time="2025-11-23T23:11:25.075063324Z" level=info msg="StartContainer for \"1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147\"" Nov 23 23:11:25.076243 containerd[1503]: time="2025-11-23T23:11:25.076005706Z" level=info msg="connecting to shim 1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147" address="unix:///run/containerd/s/e43fedc230c50df557155d5adc13677b24d04407a5b3d1484ad4661970ecb71b" protocol=ttrpc version=3 Nov 23 23:11:25.096140 systemd[1]: Started cri-containerd-1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147.scope - libcontainer container 1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147. 
Nov 23 23:11:25.131845 containerd[1503]: time="2025-11-23T23:11:25.131759222Z" level=info msg="StartContainer for \"1f72bdf16bfd2f93d2e6786672c1cf9291d64df3a978cd629014247425dc0147\" returns successfully" Nov 23 23:11:25.428932 containerd[1503]: time="2025-11-23T23:11:25.428879024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6945d6f6-zn6lq,Uid:0acb505e-a17b-4491-947a-c19d317242d7,Namespace:calico-system,Attempt:0,}" Nov 23 23:11:25.429074 containerd[1503]: time="2025-11-23T23:11:25.428894425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-clgpg,Uid:412b646c-6eab-4135-aded-f9c2d582e297,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:11:25.551126 systemd-networkd[1438]: cali774ec094667: Gained IPv6LL Nov 23 23:11:25.572682 systemd-networkd[1438]: cali28b1876f4cb: Link UP Nov 23 23:11:25.573350 systemd-networkd[1438]: cali28b1876f4cb: Gained carrier Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.460 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.479 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0 calico-kube-controllers-5f6945d6f6- calico-system 0acb505e-a17b-4491-947a-c19d317242d7 813 0 2025-11-23 23:11:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f6945d6f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f6945d6f6-zn6lq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali28b1876f4cb [] [] }} ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" 
Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.479 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.522 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" HandleID="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Workload="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.522 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" HandleID="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Workload="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f6945d6f6-zn6lq", "timestamp":"2025-11-23 23:11:25.522479647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.522 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.522 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.523 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.532 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.537 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.542 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.545 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.548 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.548 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.550 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3 Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.555 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4658] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" host="localhost" Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:25.590347 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" HandleID="k8s-pod-network.3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Workload="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.568 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0", GenerateName:"calico-kube-controllers-5f6945d6f6-", Namespace:"calico-system", SelfLink:"", UID:"0acb505e-a17b-4491-947a-c19d317242d7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"5f6945d6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f6945d6f6-zn6lq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28b1876f4cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.568 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.568 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b1876f4cb ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.570 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.574 [INFO][4630] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0", GenerateName:"calico-kube-controllers-5f6945d6f6-", Namespace:"calico-system", SelfLink:"", UID:"0acb505e-a17b-4491-947a-c19d317242d7", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f6945d6f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3", Pod:"calico-kube-controllers-5f6945d6f6-zn6lq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28b1876f4cb", MAC:"ee:16:2b:4a:c0:51", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:25.591173 containerd[1503]: 2025-11-23 23:11:25.588 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" Namespace="calico-system" Pod="calico-kube-controllers-5f6945d6f6-zn6lq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6945d6f6--zn6lq-eth0" Nov 23 23:11:25.617969 containerd[1503]: time="2025-11-23T23:11:25.617425541Z" level=info msg="connecting to shim 3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3" address="unix:///run/containerd/s/dd11d4eaa922f03336d374f996c94ac4fc3e6a6013d9dc6264772ee8761e66f9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:25.627895 kubelet[2670]: E1123 23:11:25.627861 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:25.632551 kubelet[2670]: E1123 23:11:25.631480 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:25.634550 kubelet[2670]: E1123 23:11:25.634503 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:25.637925 kubelet[2670]: E1123 23:11:25.635670 2670 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:25.655844 kubelet[2670]: I1123 23:11:25.655789 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7jgpd" podStartSLOduration=38.655139204 podStartE2EDuration="38.655139204s" podCreationTimestamp="2025-11-23 23:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:11:25.651658085 +0000 UTC m=+45.333763064" watchObservedRunningTime="2025-11-23 23:11:25.655139204 +0000 UTC m=+45.337244183" Nov 23 23:11:25.672777 kubelet[2670]: I1123 23:11:25.672685 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hs4gj" podStartSLOduration=38.672666165 podStartE2EDuration="38.672666165s" podCreationTimestamp="2025-11-23 23:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:11:25.671210572 +0000 
UTC m=+45.353315591" watchObservedRunningTime="2025-11-23 23:11:25.672666165 +0000 UTC m=+45.354771104" Nov 23 23:11:25.676713 systemd[1]: Started cri-containerd-3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3.scope - libcontainer container 3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3. Nov 23 23:11:25.679118 systemd-networkd[1438]: cali1f8050e3d07: Gained IPv6LL Nov 23 23:11:25.715785 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:25.725265 systemd-networkd[1438]: cali836f9a7edf8: Link UP Nov 23 23:11:25.725985 systemd-networkd[1438]: cali836f9a7edf8: Gained carrier Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.464 [INFO][4633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.486 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0 calico-apiserver-8f88d7d4b- calico-apiserver 412b646c-6eab-4135-aded-f9c2d582e297 818 0 2025-11-23 23:10:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f88d7d4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f88d7d4b-clgpg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali836f9a7edf8 [] [] }} ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.487 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.527 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" HandleID="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.527 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" HandleID="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1410), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f88d7d4b-clgpg", "timestamp":"2025-11-23 23:11:25.527142234 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.527 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.564 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.638 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.657 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.671 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.686 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.695 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.695 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.697 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6 Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.705 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.715 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.715 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" host="localhost" Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.715 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:25.748712 containerd[1503]: 2025-11-23 23:11:25.715 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" HandleID="k8s-pod-network.ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.720 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0", GenerateName:"calico-apiserver-8f88d7d4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"412b646c-6eab-4135-aded-f9c2d582e297", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f88d7d4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f88d7d4b-clgpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali836f9a7edf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.720 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.720 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali836f9a7edf8 ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.726 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.727 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0", GenerateName:"calico-apiserver-8f88d7d4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"412b646c-6eab-4135-aded-f9c2d582e297", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f88d7d4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6", Pod:"calico-apiserver-8f88d7d4b-clgpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali836f9a7edf8", MAC:"ee:9e:ac:6b:37:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:25.749316 containerd[1503]: 2025-11-23 23:11:25.743 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-clgpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--clgpg-eth0" Nov 23 23:11:25.777923 containerd[1503]: time="2025-11-23T23:11:25.777839813Z" level=info msg="connecting to shim ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6" address="unix:///run/containerd/s/1bcffa7a1772257898442d5219e923757f2662878aabccbe9147dfd11aa1bbe6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:25.792100 containerd[1503]: time="2025-11-23T23:11:25.792038698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6945d6f6-zn6lq,Uid:0acb505e-a17b-4491-947a-c19d317242d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c7794765fa203c40d9f28f5f9356972be5b852219d9414ad8de88ef5a53b4b3\"" Nov 23 23:11:25.794061 containerd[1503]: time="2025-11-23T23:11:25.794020384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:11:25.825149 systemd[1]: Started cri-containerd-ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6.scope - libcontainer container ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6. 
Nov 23 23:11:25.842857 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:25.897973 containerd[1503]: time="2025-11-23T23:11:25.897921522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-clgpg,Uid:412b646c-6eab-4135-aded-f9c2d582e297,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ae4cbf2ac1e0e85f2e224d36a10dc19ce206050cc867cc717187877d54ea03b6\"" Nov 23 23:11:26.014305 containerd[1503]: time="2025-11-23T23:11:26.014169135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:26.016311 containerd[1503]: time="2025-11-23T23:11:26.016262742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:11:26.016445 containerd[1503]: time="2025-11-23T23:11:26.016327184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:11:26.016610 kubelet[2670]: E1123 23:11:26.016575 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:11:26.017167 kubelet[2670]: E1123 23:11:26.016976 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:11:26.017429 kubelet[2670]: E1123 23:11:26.017236 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmp9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f6945d6f6-zn6lq_calico-system(0acb505e-a17b-4491-947a-c19d317242d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:26.017698 containerd[1503]: time="2025-11-23T23:11:26.017664373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:11:26.018583 kubelet[2670]: E1123 23:11:26.018535 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 23:11:26.191103 
systemd-networkd[1438]: calif86b817204a: Gained IPv6LL Nov 23 23:11:26.226620 containerd[1503]: time="2025-11-23T23:11:26.226534829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:26.229744 containerd[1503]: time="2025-11-23T23:11:26.229431253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:11:26.229744 containerd[1503]: time="2025-11-23T23:11:26.229504295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:26.229864 kubelet[2670]: E1123 23:11:26.229710 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:26.229864 kubelet[2670]: E1123 23:11:26.229757 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:26.229975 kubelet[2670]: E1123 23:11:26.229881 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvq7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8f88d7d4b-clgpg_calico-apiserver(412b646c-6eab-4135-aded-f9c2d582e297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:26.231244 kubelet[2670]: E1123 23:11:26.231185 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:26.429663 containerd[1503]: time="2025-11-23T23:11:26.429407430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-sgw76,Uid:ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:11:26.552772 kubelet[2670]: I1123 23:11:26.552610 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:11:26.555424 kubelet[2670]: E1123 23:11:26.555377 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:26.626249 systemd-networkd[1438]: cali7ae379c3a51: Link UP Nov 23 23:11:26.627059 systemd-networkd[1438]: cali7ae379c3a51: Gained carrier Nov 23 23:11:26.644165 kubelet[2670]: E1123 23:11:26.644081 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.466 [INFO][4807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.492 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0 calico-apiserver-8f88d7d4b- calico-apiserver ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf 823 0 2025-11-23 23:10:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f88d7d4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f88d7d4b-sgw76 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7ae379c3a51 [] [] }} ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.492 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.526 [INFO][4822] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" 
HandleID="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.527 [INFO][4822] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" HandleID="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8f88d7d4b-sgw76", "timestamp":"2025-11-23 23:11:26.526825282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.527 [INFO][4822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.527 [INFO][4822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.527 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.543 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.553 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.569 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.575 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.587 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.587 [INFO][4822] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.598 [INFO][4822] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5 Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.607 [INFO][4822] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.621 [INFO][4822] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.621 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" host="localhost" Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.621 [INFO][4822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:11:26.646780 containerd[1503]: 2025-11-23 23:11:26.621 [INFO][4822] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" HandleID="k8s-pod-network.f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Workload="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.623 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0", GenerateName:"calico-apiserver-8f88d7d4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f88d7d4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f88d7d4b-sgw76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae379c3a51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.624 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.624 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ae379c3a51 ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.626 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.627 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0", GenerateName:"calico-apiserver-8f88d7d4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f88d7d4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5", Pod:"calico-apiserver-8f88d7d4b-sgw76", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7ae379c3a51", MAC:"26:89:be:77:55:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:11:26.650276 containerd[1503]: 2025-11-23 23:11:26.643 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" Namespace="calico-apiserver" Pod="calico-apiserver-8f88d7d4b-sgw76" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f88d7d4b--sgw76-eth0" Nov 23 23:11:26.650489 kubelet[2670]: E1123 23:11:26.650279 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:26.651497 kubelet[2670]: E1123 23:11:26.651431 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:26.653160 kubelet[2670]: E1123 23:11:26.652975 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:26.653296 kubelet[2670]: E1123 23:11:26.653270 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 
23:11:26.655223 kubelet[2670]: E1123 23:11:26.655155 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:26.694647 containerd[1503]: time="2025-11-23T23:11:26.694403417Z" level=info msg="connecting to shim f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5" address="unix:///run/containerd/s/1369b6d883388049e6030fdee55829587c3643856f1dc2a2bfdec97548916c83" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:11:26.737167 systemd[1]: Started cri-containerd-f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5.scope - libcontainer container f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5. Nov 23 23:11:26.752380 systemd-resolved[1357]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:11:26.794988 containerd[1503]: time="2025-11-23T23:11:26.794944698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f88d7d4b-sgw76,Uid:ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f22efe8e1dd5683bc8ef93decd3f3fa0a20e3784ce6d74ebc25ce001ed9518f5\"" Nov 23 23:11:26.798947 containerd[1503]: time="2025-11-23T23:11:26.798176530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:11:26.832129 systemd-networkd[1438]: cali107d2d7d4a7: Gained IPv6LL Nov 23 23:11:27.015807 containerd[1503]: time="2025-11-23T23:11:27.015605687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:27.019644 containerd[1503]: time="2025-11-23T23:11:27.019339928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:11:27.019644 containerd[1503]: time="2025-11-23T23:11:27.019345328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:27.020055 kubelet[2670]: E1123 23:11:27.020011 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:27.020359 kubelet[2670]: E1123 23:11:27.020070 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:27.020410 kubelet[2670]: E1123 23:11:27.020343 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwkg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8f88d7d4b-sgw76_calico-apiserver(ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:27.022669 kubelet[2670]: E1123 23:11:27.022092 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:27.407060 systemd-networkd[1438]: cali28b1876f4cb: Gained IPv6LL Nov 23 23:11:27.535047 systemd-networkd[1438]: cali836f9a7edf8: Gained IPv6LL Nov 23 23:11:27.597647 systemd-networkd[1438]: vxlan.calico: Link UP Nov 23 23:11:27.597661 systemd-networkd[1438]: vxlan.calico: Gained carrier Nov 23 23:11:27.652324 kubelet[2670]: E1123 23:11:27.652284 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:27.652578 kubelet[2670]: E1123 23:11:27.652541 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 
23:11:27.653342 kubelet[2670]: E1123 23:11:27.653312 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:27.655286 kubelet[2670]: E1123 23:11:27.655247 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:27.983092 systemd-networkd[1438]: cali7ae379c3a51: Gained IPv6LL Nov 23 23:11:28.370033 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:34832.service - OpenSSH per-connection server daemon (10.0.0.1:34832). Nov 23 23:11:28.437576 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 34832 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:28.439412 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:28.444064 systemd-logind[1483]: New session 9 of user core. Nov 23 23:11:28.457161 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 23 23:11:28.658000 kubelet[2670]: E1123 23:11:28.657749 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:28.660066 sshd[5030]: Connection closed by 10.0.0.1 port 34832 Nov 23 23:11:28.661838 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:28.669093 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:34832.service: Deactivated successfully. Nov 23 23:11:28.673154 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:11:28.677681 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:11:28.678833 systemd-logind[1483]: Removed session 9. 
Nov 23 23:11:29.455164 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Nov 23 23:11:31.431797 containerd[1503]: time="2025-11-23T23:11:31.431656579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:11:31.678485 containerd[1503]: time="2025-11-23T23:11:31.678424760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:31.680332 containerd[1503]: time="2025-11-23T23:11:31.680269196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:11:31.680471 containerd[1503]: time="2025-11-23T23:11:31.680371318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:11:31.680689 kubelet[2670]: E1123 23:11:31.680642 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:31.681137 kubelet[2670]: E1123 23:11:31.680717 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:31.681137 kubelet[2670]: E1123 23:11:31.680929 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9542fe333b649a49bebbed2ee2383fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:31.683103 containerd[1503]: time="2025-11-23T23:11:31.682981529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:11:31.892537 containerd[1503]: time="2025-11-23T23:11:31.892479822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:31.900830 containerd[1503]: time="2025-11-23T23:11:31.900739423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:11:31.900959 containerd[1503]: time="2025-11-23T23:11:31.900838145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:11:31.901114 kubelet[2670]: E1123 23:11:31.901046 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:31.901175 kubelet[2670]: E1123 23:11:31.901110 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:31.901325 kubelet[2670]: E1123 23:11:31.901264 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:31.902535 kubelet[2670]: E1123 23:11:31.902466 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8464bc68f8-w7h5q" podUID="03841c62-c516-4f22-ae1c-acb3dc1c42a5" Nov 23 23:11:33.675597 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:40632.service - OpenSSH per-connection server daemon (10.0.0.1:40632). Nov 23 23:11:33.744635 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 40632 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:33.746246 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:33.751007 systemd-logind[1483]: New session 10 of user core. Nov 23 23:11:33.762126 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:11:33.950066 sshd[5058]: Connection closed by 10.0.0.1 port 40632 Nov 23 23:11:33.951493 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:33.959770 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:40632.service: Deactivated successfully. 
Nov 23 23:11:33.962016 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:11:33.963041 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:11:33.967042 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:40646.service - OpenSSH per-connection server daemon (10.0.0.1:40646). Nov 23 23:11:33.967893 systemd-logind[1483]: Removed session 10. Nov 23 23:11:34.028374 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 40646 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:34.030075 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:34.036008 systemd-logind[1483]: New session 11 of user core. Nov 23 23:11:34.046129 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:11:34.282821 sshd[5075]: Connection closed by 10.0.0.1 port 40646 Nov 23 23:11:34.283267 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:34.293774 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:40646.service: Deactivated successfully. Nov 23 23:11:34.297607 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:11:34.298700 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:11:34.302830 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:40650.service - OpenSSH per-connection server daemon (10.0.0.1:40650). Nov 23 23:11:34.304754 systemd-logind[1483]: Removed session 11. Nov 23 23:11:34.370404 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 40650 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:34.374779 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:34.383043 systemd-logind[1483]: New session 12 of user core. Nov 23 23:11:34.396113 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 23 23:11:34.579095 sshd[5094]: Connection closed by 10.0.0.1 port 40650 Nov 23 23:11:34.579403 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:34.584510 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:40650.service: Deactivated successfully. Nov 23 23:11:34.588835 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:11:34.589749 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:11:34.590855 systemd-logind[1483]: Removed session 12. Nov 23 23:11:39.433258 containerd[1503]: time="2025-11-23T23:11:39.433134235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:11:39.599606 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:49118.service - OpenSSH per-connection server daemon (10.0.0.1:49118). Nov 23 23:11:39.644589 containerd[1503]: time="2025-11-23T23:11:39.644489206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:39.647744 containerd[1503]: time="2025-11-23T23:11:39.647288050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:11:39.647744 containerd[1503]: time="2025-11-23T23:11:39.647314571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:11:39.647994 kubelet[2670]: E1123 23:11:39.647560 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:11:39.647994 kubelet[2670]: E1123 23:11:39.647630 2670 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:11:39.647994 kubelet[2670]: E1123 23:11:39.647771 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zpkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:
nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:39.650922 containerd[1503]: time="2025-11-23T23:11:39.650729185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:11:39.668962 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 49118 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:39.671714 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:39.679044 systemd-logind[1483]: New session 13 of user core. Nov 23 23:11:39.688147 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:11:39.848959 sshd[5121]: Connection closed by 10.0.0.1 port 49118 Nov 23 23:11:39.850143 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:39.858766 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:49118.service: Deactivated successfully. Nov 23 23:11:39.860859 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:11:39.862016 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:11:39.863255 containerd[1503]: time="2025-11-23T23:11:39.863159293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:39.866561 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:49142.service - OpenSSH per-connection server daemon (10.0.0.1:49142). Nov 23 23:11:39.867457 systemd-logind[1483]: Removed session 13. 
Nov 23 23:11:39.867912 containerd[1503]: time="2025-11-23T23:11:39.867745926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:11:39.867912 containerd[1503]: time="2025-11-23T23:11:39.867858008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:11:39.868848 kubelet[2670]: E1123 23:11:39.868100 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:11:39.868848 kubelet[2670]: E1123 23:11:39.868158 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:11:39.868848 kubelet[2670]: E1123 23:11:39.868377 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zpkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2tmhj_calico-system(02b80ccd-71ac-4684-b4ef-36bab9efb9cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:39.869688 kubelet[2670]: E1123 23:11:39.869568 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:39.960525 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 49142 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:39.962509 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:39.969572 systemd-logind[1483]: New session 14 of user core. Nov 23 23:11:39.986164 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:11:40.235999 sshd[5138]: Connection closed by 10.0.0.1 port 49142 Nov 23 23:11:40.236328 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:40.248267 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:49142.service: Deactivated successfully. Nov 23 23:11:40.253007 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:11:40.254876 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. 
Nov 23 23:11:40.257586 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:49174.service - OpenSSH per-connection server daemon (10.0.0.1:49174). Nov 23 23:11:40.259183 systemd-logind[1483]: Removed session 14. Nov 23 23:11:40.321189 sshd[5150]: Accepted publickey for core from 10.0.0.1 port 49174 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:40.323374 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:40.327861 systemd-logind[1483]: New session 15 of user core. Nov 23 23:11:40.341150 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:11:40.433737 containerd[1503]: time="2025-11-23T23:11:40.433664624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:11:40.642839 containerd[1503]: time="2025-11-23T23:11:40.642772918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:40.643790 containerd[1503]: time="2025-11-23T23:11:40.643746693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:11:40.643885 containerd[1503]: time="2025-11-23T23:11:40.643821975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:11:40.644115 kubelet[2670]: E1123 23:11:40.644077 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:11:40.644204 kubelet[2670]: E1123 23:11:40.644127 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:11:40.644324 kubelet[2670]: E1123 23:11:40.644269 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmp9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f6945d6f6-zn6lq_calico-system(0acb505e-a17b-4491-947a-c19d317242d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:40.645501 kubelet[2670]: E1123 23:11:40.645455 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 23:11:41.031139 sshd[5153]: Connection closed by 10.0.0.1 port 49174 Nov 23 23:11:41.029834 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:41.040242 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:49174.service: Deactivated successfully. Nov 23 23:11:41.043282 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:11:41.046102 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:11:41.053084 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:49214.service - OpenSSH per-connection server daemon (10.0.0.1:49214). Nov 23 23:11:41.054872 systemd-logind[1483]: Removed session 15. Nov 23 23:11:41.128539 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 49214 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:41.130287 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:41.135497 systemd-logind[1483]: New session 16 of user core. Nov 23 23:11:41.145184 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:11:41.440708 containerd[1503]: time="2025-11-23T23:11:41.440408086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:11:41.523752 sshd[5179]: Connection closed by 10.0.0.1 port 49214 Nov 23 23:11:41.525592 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:41.538821 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:49214.service: Deactivated successfully. Nov 23 23:11:41.541668 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:11:41.544200 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:11:41.549521 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240). 
Nov 23 23:11:41.550761 systemd-logind[1483]: Removed session 16. Nov 23 23:11:41.617315 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:41.618764 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:41.623833 systemd-logind[1483]: New session 17 of user core. Nov 23 23:11:41.630128 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 23:11:41.664290 containerd[1503]: time="2025-11-23T23:11:41.664244165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:41.665293 containerd[1503]: time="2025-11-23T23:11:41.665253060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:11:41.665366 containerd[1503]: time="2025-11-23T23:11:41.665356701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:41.665812 kubelet[2670]: E1123 23:11:41.665640 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:41.666326 kubelet[2670]: E1123 23:11:41.665831 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:41.667163 containerd[1503]: time="2025-11-23T23:11:41.667085208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:11:41.668037 kubelet[2670]: E1123 23:11:41.667975 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwkg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8f88d7d4b-sgw76_calico-apiserver(ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:41.669244 kubelet[2670]: E1123 23:11:41.669206 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:41.800087 sshd[5193]: Connection closed by 10.0.0.1 port 49240 Nov 23 23:11:41.800435 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:41.805350 systemd[1]: 
sshd@16-10.0.0.81:22-10.0.0.1:49240.service: Deactivated successfully. Nov 23 23:11:41.810235 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:11:41.811395 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:11:41.812776 systemd-logind[1483]: Removed session 17. Nov 23 23:11:41.874291 containerd[1503]: time="2025-11-23T23:11:41.874111311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:41.893142 containerd[1503]: time="2025-11-23T23:11:41.893076239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:11:41.893377 containerd[1503]: time="2025-11-23T23:11:41.893094720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:41.893520 kubelet[2670]: E1123 23:11:41.893477 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:11:41.893807 kubelet[2670]: E1123 23:11:41.893634 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:11:41.894120 kubelet[2670]: E1123 23:11:41.893929 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rw9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zpgkw_calico-system(b221c963-4636-4d56-a9f8-962285b56868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:41.894624 containerd[1503]: time="2025-11-23T23:11:41.894446060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:11:41.895588 kubelet[2670]: E1123 23:11:41.895464 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:42.114397 containerd[1503]: time="2025-11-23T23:11:42.114333759Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 23 23:11:42.115600 containerd[1503]: time="2025-11-23T23:11:42.115496056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:11:42.115600 containerd[1503]: time="2025-11-23T23:11:42.115561257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:11:42.115807 kubelet[2670]: E1123 23:11:42.115748 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:42.115852 kubelet[2670]: E1123 23:11:42.115819 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:11:42.116411 kubelet[2670]: E1123 23:11:42.116134 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvq7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8f88d7d4b-clgpg_calico-apiserver(412b646c-6eab-4135-aded-f9c2d582e297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:42.118021 kubelet[2670]: E1123 23:11:42.117927 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:43.433807 kubelet[2670]: E1123 23:11:43.433704 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8464bc68f8-w7h5q" podUID="03841c62-c516-4f22-ae1c-acb3dc1c42a5" Nov 23 23:11:46.709204 kubelet[2670]: E1123 23:11:46.709149 2670 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 23 23:11:46.821647 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:49256.service - OpenSSH per-connection server daemon (10.0.0.1:49256). Nov 23 23:11:46.909431 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 49256 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:46.911252 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:46.928782 systemd-logind[1483]: New session 18 of user core. Nov 23 23:11:46.938168 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 23:11:47.112872 sshd[5236]: Connection closed by 10.0.0.1 port 49256 Nov 23 23:11:47.113253 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:47.117476 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:49256.service: Deactivated successfully. Nov 23 23:11:47.119574 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:11:47.120765 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Nov 23 23:11:47.122471 systemd-logind[1483]: Removed session 18. Nov 23 23:11:52.135134 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:37024.service - OpenSSH per-connection server daemon (10.0.0.1:37024). Nov 23 23:11:52.217134 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 37024 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:52.218720 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:52.226437 systemd-logind[1483]: New session 19 of user core. Nov 23 23:11:52.244199 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 23 23:11:52.383337 sshd[5262]: Connection closed by 10.0.0.1 port 37024 Nov 23 23:11:52.384053 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:52.389841 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:37024.service: Deactivated successfully. Nov 23 23:11:52.391830 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 23:11:52.393684 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:11:52.396335 systemd-logind[1483]: Removed session 19. Nov 23 23:11:52.433930 kubelet[2670]: E1123 23:11:52.433782 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zpgkw" podUID="b221c963-4636-4d56-a9f8-962285b56868" Nov 23 23:11:52.437271 kubelet[2670]: E1123 23:11:52.436676 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2tmhj" podUID="02b80ccd-71ac-4684-b4ef-36bab9efb9cc" Nov 23 23:11:53.428877 kubelet[2670]: E1123 23:11:53.428710 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f6945d6f6-zn6lq" podUID="0acb505e-a17b-4491-947a-c19d317242d7" Nov 23 23:11:54.434091 containerd[1503]: time="2025-11-23T23:11:54.434046925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:11:54.635765 containerd[1503]: time="2025-11-23T23:11:54.635715833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:54.643314 containerd[1503]: time="2025-11-23T23:11:54.643237598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:11:54.644608 containerd[1503]: time="2025-11-23T23:11:54.643314879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:11:54.644647 kubelet[2670]: E1123 23:11:54.643534 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:54.644647 kubelet[2670]: E1123 23:11:54.643578 2670 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:11:54.644647 kubelet[2670]: E1123 23:11:54.643747 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c9542fe333b649a49bebbed2ee2383fa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:54.646947 containerd[1503]: time="2025-11-23T23:11:54.646696557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:11:54.839374 containerd[1503]: time="2025-11-23T23:11:54.839293883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:11:54.840335 containerd[1503]: time="2025-11-23T23:11:54.840274574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:11:54.840393 containerd[1503]: time="2025-11-23T23:11:54.840344295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:11:54.840582 kubelet[2670]: E1123 23:11:54.840530 2670 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:54.840662 kubelet[2670]: E1123 23:11:54.840594 2670 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:11:54.840918 kubelet[2670]: E1123 23:11:54.840727 2670 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rttwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8464bc68f8-w7h5q_calico-system(03841c62-c516-4f22-ae1c-acb3dc1c42a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:11:54.842335 kubelet[2670]: E1123 23:11:54.842192 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8464bc68f8-w7h5q" podUID="03841c62-c516-4f22-ae1c-acb3dc1c42a5" Nov 23 23:11:55.430067 kubelet[2670]: E1123 23:11:55.430016 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-sgw76" podUID="ec8bcf17-8d1d-4b90-9b92-408df6d5c1bf" Nov 23 23:11:57.405773 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:37040.service - OpenSSH per-connection server daemon (10.0.0.1:37040). Nov 23 23:11:57.429967 kubelet[2670]: E1123 23:11:57.429811 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8f88d7d4b-clgpg" podUID="412b646c-6eab-4135-aded-f9c2d582e297" Nov 23 23:11:57.481918 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 37040 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:11:57.482655 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:11:57.487023 systemd-logind[1483]: New session 20 of user core. Nov 23 23:11:57.496295 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 23 23:11:57.728928 sshd[5283]: Connection closed by 10.0.0.1 port 37040 Nov 23 23:11:57.729720 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Nov 23 23:11:57.736622 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:11:57.736871 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:37040.service: Deactivated successfully. Nov 23 23:11:57.740685 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 23:11:57.742179 systemd-logind[1483]: Removed session 20.