Nov 23 23:17:28.767256 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 23 23:17:28.767278 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025 Nov 23 23:17:28.767287 kernel: KASLR enabled Nov 23 23:17:28.767293 kernel: efi: EFI v2.7 by EDK II Nov 23 23:17:28.767298 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Nov 23 23:17:28.767304 kernel: random: crng init done Nov 23 23:17:28.767310 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Nov 23 23:17:28.767316 kernel: secureboot: Secure boot enabled Nov 23 23:17:28.767322 kernel: ACPI: Early table checksum verification disabled Nov 23 23:17:28.767329 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Nov 23 23:17:28.767335 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 23 23:17:28.767341 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767347 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767353 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767360 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767367 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767373 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767379 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767385 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767392 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:17:28.767398 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 23 23:17:28.767404 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 23:17:28.767410 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 23 23:17:28.767416 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Nov 23 23:17:28.767422 kernel: Zone ranges: Nov 23 23:17:28.767429 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 23 23:17:28.767435 kernel: DMA32 empty Nov 23 23:17:28.767441 kernel: Normal empty Nov 23 23:17:28.767447 kernel: Device empty Nov 23 23:17:28.767453 kernel: Movable zone start for each node Nov 23 23:17:28.767458 kernel: Early memory node ranges Nov 23 23:17:28.767464 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Nov 23 23:17:28.767470 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Nov 23 23:17:28.767476 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Nov 23 23:17:28.767482 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Nov 23 23:17:28.767488 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Nov 23 23:17:28.767494 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Nov 23 23:17:28.767501 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Nov 23 23:17:28.767514 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Nov 23 23:17:28.767521 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 23 23:17:28.767529 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Nov 23 23:17:28.767536 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 23 23:17:28.767542 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Nov 23 23:17:28.767549 kernel: psci: probing for conduit method from ACPI. Nov 23 23:17:28.767556 kernel: psci: PSCIv1.1 detected in firmware. Nov 23 23:17:28.767563 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 23:17:28.767569 kernel: psci: Trusted OS migration not required Nov 23 23:17:28.767576 kernel: psci: SMC Calling Convention v1.1 Nov 23 23:17:28.767583 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 23 23:17:28.767589 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 23:17:28.767596 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 23:17:28.767603 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 23 23:17:28.767610 kernel: Detected PIPT I-cache on CPU0 Nov 23 23:17:28.767617 kernel: CPU features: detected: GIC system register CPU interface Nov 23 23:17:28.767624 kernel: CPU features: detected: Spectre-v4 Nov 23 23:17:28.767630 kernel: CPU features: detected: Spectre-BHB Nov 23 23:17:28.767637 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 23 23:17:28.767643 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 23 23:17:28.767650 kernel: CPU features: detected: ARM erratum 1418040 Nov 23 23:17:28.767656 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 23 23:17:28.767663 kernel: alternatives: applying boot alternatives Nov 23 23:17:28.767670 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:17:28.767677 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 23:17:28.767684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 23:17:28.767692 kernel: Fallback order for Node 0: 0 Nov 23 23:17:28.767698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 23 23:17:28.767704 kernel: Policy zone: DMA Nov 23 23:17:28.767710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 23:17:28.767717 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 23 23:17:28.767723 kernel: software IO TLB: area num 4. Nov 23 23:17:28.767729 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 23 23:17:28.767736 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Nov 23 23:17:28.767742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 23 23:17:28.767749 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 23:17:28.767756 kernel: rcu: RCU event tracing is enabled. Nov 23 23:17:28.767762 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 23 23:17:28.767770 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 23:17:28.767777 kernel: Tracing variant of Tasks RCU enabled. Nov 23 23:17:28.767783 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 23 23:17:28.767789 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 23 23:17:28.767796 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 23 23:17:28.767802 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 23 23:17:28.767809 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 23:17:28.767815 kernel: GICv3: 256 SPIs implemented Nov 23 23:17:28.767821 kernel: GICv3: 0 Extended SPIs implemented Nov 23 23:17:28.767828 kernel: Root IRQ handler: gic_handle_irq Nov 23 23:17:28.767834 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 23 23:17:28.767840 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 23:17:28.767848 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 23 23:17:28.767854 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 23 23:17:28.767860 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 23 23:17:28.767867 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 23 23:17:28.767873 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 23 23:17:28.767880 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 23 23:17:28.767886 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 23:17:28.767892 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:17:28.767898 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 23 23:17:28.767905 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 23 23:17:28.767911 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 23 23:17:28.767919 kernel: arm-pv: using stolen time PV Nov 23 23:17:28.767925 kernel: Console: colour dummy device 80x25 Nov 23 23:17:28.767932 kernel: ACPI: Core revision 20240827 Nov 23 23:17:28.767939 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 23 23:17:28.767945 kernel: pid_max: default: 32768 minimum: 301 Nov 23 23:17:28.767952 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 23:17:28.767958 kernel: landlock: Up and running. Nov 23 23:17:28.767965 kernel: SELinux: Initializing. Nov 23 23:17:28.767971 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:17:28.767979 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:17:28.767986 kernel: rcu: Hierarchical SRCU implementation. Nov 23 23:17:28.767992 kernel: rcu: Max phase no-delay instances is 400. Nov 23 23:17:28.767999 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 23:17:28.768016 kernel: Remapping and enabling EFI services. Nov 23 23:17:28.768022 kernel: smp: Bringing up secondary CPUs ... 
Nov 23 23:17:28.768029 kernel: Detected PIPT I-cache on CPU1 Nov 23 23:17:28.768035 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 23 23:17:28.768042 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 23 23:17:28.768050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:17:28.768061 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 23 23:17:28.768068 kernel: Detected PIPT I-cache on CPU2 Nov 23 23:17:28.768076 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 23 23:17:28.768083 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 23 23:17:28.768090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:17:28.768097 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 23 23:17:28.768104 kernel: Detected PIPT I-cache on CPU3 Nov 23 23:17:28.768112 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 23 23:17:28.768125 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 23 23:17:28.768132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:17:28.768139 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 23 23:17:28.768145 kernel: smp: Brought up 1 node, 4 CPUs Nov 23 23:17:28.768153 kernel: SMP: Total of 4 processors activated. Nov 23 23:17:28.768159 kernel: CPU: All CPU(s) started at EL1 Nov 23 23:17:28.768166 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 23:17:28.768174 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 23 23:17:28.768181 kernel: CPU features: detected: Common not Private translations Nov 23 23:17:28.768189 kernel: CPU features: detected: CRC32 instructions Nov 23 23:17:28.768197 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 23 23:17:28.768203 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 23 23:17:28.768211 kernel: CPU features: detected: LSE atomic instructions Nov 23 23:17:28.768218 kernel: CPU features: detected: Privileged Access Never Nov 23 23:17:28.768296 kernel: CPU features: detected: RAS Extension Support Nov 23 23:17:28.768308 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 23 23:17:28.768316 kernel: alternatives: applying system-wide alternatives Nov 23 23:17:28.768323 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 23 23:17:28.768334 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved) Nov 23 23:17:28.768341 kernel: devtmpfs: initialized Nov 23 23:17:28.768348 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 23:17:28.768355 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 23 23:17:28.768362 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 23 23:17:28.768369 kernel: 0 pages in range for non-PLT usage Nov 23 23:17:28.768376 kernel: 508400 pages in range for PLT usage Nov 23 23:17:28.768383 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 23:17:28.768390 kernel: SMBIOS 3.0.0 present. 
Nov 23 23:17:28.768398 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 23 23:17:28.768405 kernel: DMI: Memory slots populated: 1/1 Nov 23 23:17:28.768412 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 23:17:28.768418 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 23:17:28.768426 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 23:17:28.768433 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 23:17:28.768440 kernel: audit: initializing netlink subsys (disabled) Nov 23 23:17:28.768447 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Nov 23 23:17:28.768454 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 23:17:28.768462 kernel: cpuidle: using governor menu Nov 23 23:17:28.768469 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 23 23:17:28.768476 kernel: ASID allocator initialised with 32768 entries Nov 23 23:17:28.768482 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 23:17:28.768489 kernel: Serial: AMBA PL011 UART driver Nov 23 23:17:28.768496 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 23:17:28.768503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 23:17:28.768517 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 23:17:28.768524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 23:17:28.768532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 23:17:28.768539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 23:17:28.768546 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 23:17:28.768553 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 23:17:28.768559 kernel: ACPI: Added _OSI(Module Device) Nov 23 23:17:28.768566 kernel: ACPI: Added _OSI(Processor Device) Nov 23 23:17:28.768573 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 23:17:28.768581 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 23:17:28.768587 kernel: ACPI: Interpreter enabled Nov 23 23:17:28.768595 kernel: ACPI: Using GIC for interrupt routing Nov 23 23:17:28.768603 kernel: ACPI: MCFG table detected, 1 entries Nov 23 23:17:28.768610 kernel: ACPI: CPU0 has been hot-added Nov 23 23:17:28.768616 kernel: ACPI: CPU1 has been hot-added Nov 23 23:17:28.768624 kernel: ACPI: CPU2 has been hot-added Nov 23 23:17:28.768631 kernel: ACPI: CPU3 has been hot-added Nov 23 23:17:28.768638 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 23 23:17:28.768645 kernel: printk: legacy console [ttyAMA0] enabled Nov 23 23:17:28.768652 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 23 23:17:28.768808 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 23:17:28.768874 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 23:17:28.768933 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 23:17:28.768991 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 23 23:17:28.769149 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 23 23:17:28.769162 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 23 23:17:28.769170 
kernel: PCI host bridge to bus 0000:00 Nov 23 23:17:28.769243 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 23 23:17:28.769298 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 23:17:28.769351 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 23 23:17:28.769403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 23 23:17:28.769481 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 23 23:17:28.769569 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 23 23:17:28.769639 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 23 23:17:28.769704 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 23 23:17:28.769762 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 23 23:17:28.769820 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 23 23:17:28.769881 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 23 23:17:28.769939 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 23 23:17:28.769991 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 23:17:28.770068 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 23:17:28.770122 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 23:17:28.770131 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 23:17:28.770138 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 23:17:28.770145 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 23:17:28.770152 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 23:17:28.770159 kernel: iommu: Default domain type: Translated Nov 23 23:17:28.770166 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 23:17:28.770173 kernel: efivars: Registered efivars operations Nov 23 23:17:28.770182 kernel: vgaarb: loaded Nov 23 23:17:28.770189 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 23:17:28.770196 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 23:17:28.770203 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 23:17:28.770210 kernel: pnp: PnP ACPI init Nov 23 23:17:28.770280 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 23:17:28.770290 kernel: pnp: PnP ACPI: found 1 devices Nov 23 23:17:28.770297 kernel: NET: Registered PF_INET protocol family Nov 23 23:17:28.770306 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 23:17:28.770313 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 23:17:28.770320 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 23:17:28.770327 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 23:17:28.770334 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 23:17:28.770341 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 23:17:28.770348 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:17:28.770355 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:17:28.770362 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 23:17:28.770370 kernel: PCI: CLS 0 bytes, default 64 Nov 23 23:17:28.770377 
kernel: kvm [1]: HYP mode not available Nov 23 23:17:28.770384 kernel: Initialise system trusted keyrings Nov 23 23:17:28.770391 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 23:17:28.770397 kernel: Key type asymmetric registered Nov 23 23:17:28.770404 kernel: Asymmetric key parser 'x509' registered Nov 23 23:17:28.770411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 23:17:28.770418 kernel: io scheduler mq-deadline registered Nov 23 23:17:28.770425 kernel: io scheduler kyber registered Nov 23 23:17:28.770433 kernel: io scheduler bfq registered Nov 23 23:17:28.770440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 23:17:28.770447 kernel: ACPI: button: Power Button [PWRB] Nov 23 23:17:28.770455 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 23:17:28.770525 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 23 23:17:28.770535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 23:17:28.770542 kernel: thunder_xcv, ver 1.0 Nov 23 23:17:28.770549 kernel: thunder_bgx, ver 1.0 Nov 23 23:17:28.770556 kernel: nicpf, ver 1.0 Nov 23 23:17:28.770565 kernel: nicvf, ver 1.0 Nov 23 23:17:28.770634 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 23:17:28.770690 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:17:28 UTC (1763939848) Nov 23 23:17:28.770699 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 23:17:28.770707 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 23:17:28.770714 kernel: watchdog: NMI not fully supported Nov 23 23:17:28.770721 kernel: watchdog: Hard watchdog permanently disabled Nov 23 23:17:28.770728 kernel: NET: Registered PF_INET6 protocol family Nov 23 23:17:28.770737 kernel: Segment Routing with IPv6 Nov 23 23:17:28.770743 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 23:17:28.770750 kernel: NET: Registered PF_PACKET protocol family Nov 23 23:17:28.770757 kernel: Key type dns_resolver registered Nov 23 23:17:28.770764 kernel: registered taskstats version 1 Nov 23 23:17:28.770771 kernel: Loading compiled-in X.509 certificates Nov 23 23:17:28.770778 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339' Nov 23 23:17:28.770785 kernel: Demotion targets for Node 0: null Nov 23 23:17:28.770791 kernel: Key type .fscrypt registered Nov 23 23:17:28.770799 kernel: Key type fscrypt-provisioning registered Nov 23 23:17:28.770807 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 23:17:28.770814 kernel: ima: Allocated hash algorithm: sha1 Nov 23 23:17:28.770820 kernel: ima: No architecture policies found Nov 23 23:17:28.770828 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 23:17:28.770835 kernel: clk: Disabling unused clocks Nov 23 23:17:28.770842 kernel: PM: genpd: Disabling unused power domains Nov 23 23:17:28.770849 kernel: Warning: unable to open an initial console. Nov 23 23:17:28.770856 kernel: Freeing unused kernel memory: 39552K Nov 23 23:17:28.770864 kernel: Run /init as init process Nov 23 23:17:28.770871 kernel: with arguments: Nov 23 23:17:28.770878 kernel: /init Nov 23 23:17:28.770884 kernel: with environment: Nov 23 23:17:28.770891 kernel: HOME=/ Nov 23 23:17:28.770898 kernel: TERM=linux Nov 23 23:17:28.770906 systemd[1]: Successfully made /usr/ read-only. 
Nov 23 23:17:28.770916 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:17:28.770925 systemd[1]: Detected virtualization kvm. Nov 23 23:17:28.770932 systemd[1]: Detected architecture arm64. Nov 23 23:17:28.770939 systemd[1]: Running in initrd. Nov 23 23:17:28.770946 systemd[1]: No hostname configured, using default hostname. Nov 23 23:17:28.770954 systemd[1]: Hostname set to . Nov 23 23:17:28.770961 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:17:28.770968 systemd[1]: Queued start job for default target initrd.target. Nov 23 23:17:28.770976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:17:28.770984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:17:28.770992 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 23:17:28.771000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:17:28.771024 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 23:17:28.771032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 23:17:28.771041 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 23:17:28.771050 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 23:17:28.771057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:17:28.771065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:17:28.771072 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:17:28.771080 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:17:28.771087 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:17:28.771095 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:17:28.771102 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:17:28.771109 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:17:28.771118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 23:17:28.771125 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 23:17:28.771133 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:17:28.771141 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:17:28.771148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:17:28.771156 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:17:28.771163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 23:17:28.771170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:17:28.771179 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 23 23:17:28.771187 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 23:17:28.771195 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 23:17:28.771202 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:17:28.771209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:17:28.771217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:17:28.771225 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 23:17:28.771234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:17:28.771242 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 23:17:28.771249 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:17:28.771274 systemd-journald[245]: Collecting audit messages is disabled. Nov 23 23:17:28.771295 systemd-journald[245]: Journal started Nov 23 23:17:28.771312 systemd-journald[245]: Runtime Journal (/run/log/journal/73a9475a970f4ffe834c9f107feccabf) is 6M, max 48.5M, 42.4M free. Nov 23 23:17:28.761467 systemd-modules-load[248]: Inserted module 'overlay' Nov 23 23:17:28.777815 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 23:17:28.777836 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:17:28.779370 systemd-modules-load[248]: Inserted module 'br_netfilter' Nov 23 23:17:28.780204 kernel: Bridge firewalling registered Nov 23 23:17:28.779974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:17:28.781600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:17:28.783649 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:17:28.788132 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 23:17:28.789998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:17:28.792203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:17:28.806655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:17:28.816084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:17:28.816591 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 23:17:28.818232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:17:28.819689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:17:28.823528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:17:28.826026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:17:28.848688 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 23 23:17:28.873043 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:17:28.874766 systemd-resolved[289]: Positive Trust Anchors: Nov 23 23:17:28.874776 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:17:28.874808 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:17:28.879922 systemd-resolved[289]: Defaulting to hostname 'linux'. Nov 23 23:17:28.881653 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:17:28.888724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:17:28.962033 kernel: SCSI subsystem initialized Nov 23 23:17:28.966023 kernel: Loading iSCSI transport class v2.0-870. Nov 23 23:17:28.975042 kernel: iscsi: registered transport (tcp) Nov 23 23:17:28.992366 kernel: iscsi: registered transport (qla4xxx) Nov 23 23:17:28.992386 kernel: QLogic iSCSI HBA Driver Nov 23 23:17:29.010131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:17:29.035073 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:17:29.036698 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:17:29.088019 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 23:17:29.090059 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 23 23:17:29.164046 kernel: raid6: neonx8 gen() 15610 MB/s Nov 23 23:17:29.181053 kernel: raid6: neonx4 gen() 15631 MB/s Nov 23 23:17:29.198031 kernel: raid6: neonx2 gen() 13051 MB/s Nov 23 23:17:29.215030 kernel: raid6: neonx1 gen() 10363 MB/s Nov 23 23:17:29.232028 kernel: raid6: int64x8 gen() 6845 MB/s Nov 23 23:17:29.249025 kernel: raid6: int64x4 gen() 7287 MB/s Nov 23 23:17:29.266026 kernel: raid6: int64x2 gen() 6041 MB/s Nov 23 23:17:29.283027 kernel: raid6: int64x1 gen() 5006 MB/s Nov 23 23:17:29.283042 kernel: raid6: using algorithm neonx4 gen() 15631 MB/s Nov 23 23:17:29.300056 kernel: raid6: .... xor() 12226 MB/s, rmw enabled Nov 23 23:17:29.300111 kernel: raid6: using neon recovery algorithm Nov 23 23:17:29.305498 kernel: xor: measuring software checksum speed Nov 23 23:17:29.305533 kernel: 8regs : 21516 MB/sec Nov 23 23:17:29.306177 kernel: 32regs : 21681 MB/sec Nov 23 23:17:29.307323 kernel: arm64_neon : 28013 MB/sec Nov 23 23:17:29.307338 kernel: xor: using function: arm64_neon (28013 MB/sec) Nov 23 23:17:29.360042 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 23:17:29.368051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Nov 23 23:17:29.372135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:17:29.409122 systemd-udevd[502]: Using default interface naming scheme 'v255'. Nov 23 23:17:29.413282 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:17:29.415285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 23:17:29.446510 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Nov 23 23:17:29.469627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:17:29.471880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:17:29.521839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:17:29.526583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 23:17:29.571301 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 23 23:17:29.571481 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 23 23:17:29.581657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 23:17:29.581719 kernel: GPT:9289727 != 19775487 Nov 23 23:17:29.581729 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 23:17:29.581739 kernel: GPT:9289727 != 19775487 Nov 23 23:17:29.582314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:17:29.585078 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 23:17:29.585101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:17:29.582437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:17:29.586906 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:17:29.590601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:17:29.616897 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 23 23:17:29.618309 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:17:29.626708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 23 23:17:29.628172 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:17:29.640244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 23 23:17:29.641484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 23 23:17:29.649880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 23 23:17:29.651238 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:17:29.653157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:17:29.655033 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:17:29.658785 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:17:29.660733 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 23:17:29.686942 disk-uuid[595]: Primary Header is updated. Nov 23 23:17:29.686942 disk-uuid[595]: Secondary Entries is updated. Nov 23 23:17:29.686942 disk-uuid[595]: Secondary Header is updated. 
Nov 23 23:17:29.687892 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:17:29.693030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:17:29.696054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:17:30.700030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 23 23:17:30.701383 disk-uuid[600]: The operation has completed successfully. Nov 23 23:17:30.725699 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:17:30.725796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:17:30.754708 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:17:30.783095 sh[614]: Success Nov 23 23:17:30.796892 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 23:17:30.796953 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:17:30.796964 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:17:30.804031 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:17:30.831117 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:17:30.833957 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:17:30.846228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 23:17:30.854914 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626) Nov 23 23:17:30.854955 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:17:30.854975 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:17:30.860030 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:17:30.860095 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:17:30.860952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:17:30.862243 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:17:30.863563 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:17:30.864394 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:17:30.865900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 23:17:30.890035 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657) Nov 23 23:17:30.892671 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:17:30.892727 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:17:30.896033 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:17:30.896082 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:17:30.901150 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:17:30.904038 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 23:17:30.906156 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 23:17:30.977282 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 23 23:17:30.981250 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:17:31.022200 ignition[708]: Ignition 2.22.0 Nov 23 23:17:31.022217 ignition[708]: Stage: fetch-offline Nov 23 23:17:31.022247 ignition[708]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:31.022255 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:31.022334 ignition[708]: parsed url from cmdline: "" Nov 23 23:17:31.022337 ignition[708]: no config URL provided Nov 23 23:17:31.022342 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:17:31.022348 ignition[708]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:17:31.027579 systemd-networkd[808]: lo: Link UP Nov 23 23:17:31.022369 ignition[708]: op(1): [started] loading QEMU firmware config module Nov 23 23:17:31.027584 systemd-networkd[808]: lo: Gained carrier Nov 23 23:17:31.022374 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 23 23:17:31.028346 systemd-networkd[808]: Enumeration completed Nov 23 23:17:31.029043 ignition[708]: op(1): [finished] loading QEMU firmware config module Nov 23 23:17:31.028699 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:17:31.029064 ignition[708]: QEMU firmware config was not found. Ignoring... Nov 23 23:17:31.028806 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:17:31.028810 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:17:31.029889 systemd-networkd[808]: eth0: Link UP Nov 23 23:17:31.029978 systemd-networkd[808]: eth0: Gained carrier Nov 23 23:17:31.029986 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:17:31.032733 systemd[1]: Reached target network.target - Network. Nov 23 23:17:31.054087 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 23:17:31.081980 ignition[708]: parsing config with SHA512: dca238fe106afa8f112ead8a9d4843534d1a8654ae927ce8eb67bc8a88263b802cc1c4909f2683c482aa0e94c21865d18338431f2f5e15395b5b0c22cd911637 Nov 23 23:17:31.087527 unknown[708]: fetched base config from "system" Nov 23 23:17:31.088354 unknown[708]: fetched user config from "qemu" Nov 23 23:17:31.088774 ignition[708]: fetch-offline: fetch-offline passed Nov 23 23:17:31.088843 ignition[708]: Ignition finished successfully Nov 23 23:17:31.090757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:17:31.092251 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 23 23:17:31.093036 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 23:17:31.129613 ignition[817]: Ignition 2.22.0 Nov 23 23:17:31.129627 ignition[817]: Stage: kargs Nov 23 23:17:31.129768 ignition[817]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:31.129778 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:31.130547 ignition[817]: kargs: kargs passed Nov 23 23:17:31.133073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 23:17:31.130596 ignition[817]: Ignition finished successfully Nov 23 23:17:31.135578 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 23:17:31.173650 ignition[825]: Ignition 2.22.0 Nov 23 23:17:31.173666 ignition[825]: Stage: disks Nov 23 23:17:31.173802 ignition[825]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:31.173811 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:31.174555 ignition[825]: disks: disks passed Nov 23 23:17:31.176588 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 23:17:31.174766 ignition[825]: Ignition finished successfully Nov 23 23:17:31.177824 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 23:17:31.178941 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 23:17:31.180732 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:17:31.182152 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:17:31.183834 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:17:31.186555 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 23:17:31.222580 systemd-fsck[836]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 23:17:31.299080 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 23:17:31.302632 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 23:17:31.365029 kernel: EXT4-fs (vda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none. Nov 23 23:17:31.365881 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 23:17:31.367227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 23:17:31.371845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:17:31.374024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 23:17:31.374894 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 23 23:17:31.374933 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 23:17:31.374956 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:17:31.386445 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 23:17:31.388634 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 23:17:31.393543 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (844) Nov 23 23:17:31.393573 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:17:31.393584 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:17:31.397013 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:17:31.397044 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:17:31.398148 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:17:31.422796 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 23:17:31.427027 initrd-setup-root[878]: cut: /sysroot/etc/group: No such file or directory Nov 23 23:17:31.431650 initrd-setup-root[885]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 23:17:31.434976 initrd-setup-root[892]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 23:17:31.500406 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Nov 23 23:17:31.502821 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 23:17:31.504431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 23:17:31.522470 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:17:31.533151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 23:17:31.547255 ignition[961]: INFO : Ignition 2.22.0 Nov 23 23:17:31.547255 ignition[961]: INFO : Stage: mount Nov 23 23:17:31.548712 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:31.548712 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:31.548712 ignition[961]: INFO : mount: mount passed Nov 23 23:17:31.548712 ignition[961]: INFO : Ignition finished successfully Nov 23 23:17:31.550346 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 23:17:31.552506 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 23:17:31.853575 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 23:17:31.855163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:17:31.898519 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (974) Nov 23 23:17:31.898565 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:17:31.898576 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:17:31.902685 kernel: BTRFS info (device vda6): turning on async discard Nov 23 23:17:31.902731 kernel: BTRFS info (device vda6): enabling free space tree Nov 23 23:17:31.904087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:17:31.933784 ignition[991]: INFO : Ignition 2.22.0 Nov 23 23:17:31.933784 ignition[991]: INFO : Stage: files Nov 23 23:17:31.935390 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:31.935390 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:31.935390 ignition[991]: DEBUG : files: compiled without relabeling support, skipping Nov 23 23:17:31.938935 ignition[991]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 23:17:31.938935 ignition[991]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 23:17:31.938935 ignition[991]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 23:17:31.938935 ignition[991]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 23:17:31.945158 ignition[991]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 23:17:31.941831 unknown[991]: wrote ssh authorized keys file for user: core Nov 23 23:17:31.947551 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:17:31.947551 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 23:17:32.006108 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 23:17:32.143348 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:17:32.143348 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:17:32.146740 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:17:32.157595 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 23 23:17:32.443172 systemd-networkd[808]: eth0: Gained IPv6LL Nov 23 23:17:32.494959 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 23:17:32.857546 ignition[991]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 23 23:17:32.857546 ignition[991]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 23 23:17:32.862033 ignition[991]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 23 23:17:32.882996 ignition[991]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 23:17:32.886399 ignition[991]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 23 23:17:32.889388 ignition[991]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 23 23:17:32.889388 ignition[991]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 23 23:17:32.889388 ignition[991]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 23:17:32.889388 ignition[991]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:17:32.889388 ignition[991]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:17:32.889388 ignition[991]: INFO : files: files passed Nov 23 23:17:32.889388 ignition[991]: INFO : Ignition finished successfully Nov 23 23:17:32.890288 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 23:17:32.892801 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 23:17:32.896358 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 23:17:32.914777 initrd-setup-root-after-ignition[1020]: grep: /sysroot/oem/oem-release: No such file or directory Nov 23 23:17:32.913591 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 23:17:32.913694 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 23:17:32.918290 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:17:32.918290 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:17:32.923912 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:17:32.919838 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:17:32.921458 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 23:17:32.925132 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 23:17:32.977965 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 23:17:32.978122 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 23:17:32.980438 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 23:17:32.981953 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 23:17:32.984278 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 23:17:32.985175 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 23:17:33.008402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Nov 23 23:17:33.011587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:17:33.030120 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:17:33.031316 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:17:33.033203 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:17:33.034944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:17:33.035086 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:17:33.037748 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:17:33.040664 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:17:33.041745 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 23:17:33.043596 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:17:33.045977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:17:33.048145 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:17:33.050263 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:17:33.052193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:17:33.054257 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:17:33.056392 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:17:33.058199 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:17:33.059948 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:17:33.060101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:17:33.063129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:17:33.065165 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:17:33.067177 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:17:33.071094 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:17:33.072279 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 23:17:33.072394 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:17:33.076894 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 23:17:33.077153 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:17:33.079235 systemd[1]: Stopped target paths.target - Path Units. Nov 23 23:17:33.081124 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 23:17:33.082121 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:17:33.084192 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 23:17:33.085967 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 23:17:33.088168 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 23:17:33.088247 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:17:33.089844 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 23:17:33.089919 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:17:33.091556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 23 23:17:33.091782 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:17:33.093548 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:17:33.093648 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 23:17:33.095973 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:17:33.098234 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:17:33.099340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:17:33.099450 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:17:33.101653 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:17:33.101771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:17:33.107352 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 23:17:33.108189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 23:17:33.116787 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 23:17:33.124069 ignition[1048]: INFO : Ignition 2.22.0 Nov 23 23:17:33.124069 ignition[1048]: INFO : Stage: umount Nov 23 23:17:33.125658 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:17:33.125658 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:17:33.125658 ignition[1048]: INFO : umount: umount passed Nov 23 23:17:33.125658 ignition[1048]: INFO : Ignition finished successfully Nov 23 23:17:33.127709 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 23:17:33.127811 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 23:17:33.129579 systemd[1]: Stopped target network.target - Network. Nov 23 23:17:33.130802 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 23:17:33.130861 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 23:17:33.132791 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 23:17:33.132834 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 23:17:33.134594 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 23:17:33.134640 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 23:17:33.136083 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 23:17:33.136121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 23:17:33.137959 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 23:17:33.139588 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 23:17:33.149814 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 23:17:33.149932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 23:17:33.159263 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 23:17:33.160133 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 23:17:33.160296 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 23:17:33.165146 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 23:17:33.165678 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 23:17:33.170884 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Nov 23 23:17:33.170923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:17:33.174831 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 23:17:33.178764 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 23:17:33.178828 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:17:33.182644 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:17:33.182691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:17:33.185901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 23:17:33.185943 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 23:17:33.187171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 23:17:33.187213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:17:33.191229 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:17:33.196379 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 23:17:33.196437 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:17:33.196716 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 23:17:33.196820 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 23:17:33.201133 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 23:17:33.201214 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 23:17:33.204967 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 23:17:33.209178 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:17:33.211116 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 23:17:33.212052 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 23:17:33.213925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 23:17:33.213988 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 23:17:33.215182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 23:17:33.215216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:17:33.217049 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 23:17:33.217099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:17:33.219718 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 23:17:33.219772 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 23:17:33.222371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 23:17:33.222423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:17:33.226240 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 23:17:33.227661 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 23:17:33.227719 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:17:33.231136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 23:17:33.231183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 23 23:17:33.234141 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 23:17:33.234182 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:17:33.237207 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:17:33.237250 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:17:33.239078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:17:33.239121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:17:33.242950 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 23:17:33.242996 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 23 23:17:33.243042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 23:17:33.243078 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:17:33.249316 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 23:17:33.249406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 23:17:33.250622 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 23:17:33.253239 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 23:17:33.274307 systemd[1]: Switching root. Nov 23 23:17:33.306690 systemd-journald[245]: Journal stopped Nov 23 23:17:34.132981 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Nov 23 23:17:34.133109 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 23:17:34.133124 kernel: SELinux: policy capability open_perms=1 Nov 23 23:17:34.133134 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 23:17:34.133143 kernel: SELinux: policy capability always_check_network=0 Nov 23 23:17:34.133152 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 23:17:34.133164 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 23:17:34.133174 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 23:17:34.133183 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 23:17:34.133192 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 23:17:34.133203 kernel: audit: type=1403 audit(1763939853.473:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 23:17:34.133219 systemd[1]: Successfully loaded SELinux policy in 46.864ms. Nov 23 23:17:34.133240 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.698ms. Nov 23 23:17:34.133251 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:17:34.133319 systemd[1]: Detected virtualization kvm. Nov 23 23:17:34.133328 systemd[1]: Detected architecture arm64. Nov 23 23:17:34.133342 systemd[1]: Detected first boot. Nov 23 23:17:34.133352 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:17:34.133362 kernel: NET: Registered PF_VSOCK protocol family Nov 23 23:17:34.133373 zram_generator::config[1094]: No configuration found. 
Nov 23 23:17:34.133386 systemd[1]: Populated /etc with preset unit settings. Nov 23 23:17:34.133396 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 23:17:34.133407 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 23:17:34.133416 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 23:17:34.133427 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 23:17:34.133437 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 23:17:34.133448 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 23:17:34.133458 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 23:17:34.133469 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 23:17:34.133489 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 23:17:34.133501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 23:17:34.133511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 23:17:34.133521 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 23:17:34.133532 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:17:34.133542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:17:34.133552 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 23:17:34.133565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 23:17:34.133575 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 23:17:34.133590 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:17:34.133600 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 23:17:34.133611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:17:34.133621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:17:34.133631 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 23:17:34.133642 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 23:17:34.133653 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 23:17:34.133663 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 23:17:34.133673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:17:34.133687 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:17:34.133697 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:17:34.133707 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:17:34.133717 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 23:17:34.133727 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 23:17:34.133737 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 23:17:34.133748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 23 23:17:34.133758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:17:34.133767 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:17:34.133778 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 23:17:34.133787 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 23:17:34.133797 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 23:17:34.133807 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 23:17:34.133817 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 23:17:34.133827 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 23:17:34.133838 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 23:17:34.133848 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 23:17:34.133858 systemd[1]: Reached target machines.target - Containers. Nov 23 23:17:34.133867 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 23:17:34.133879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:17:34.133889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:17:34.133899 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 23:17:34.133909 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:17:34.133919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:17:34.133931 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:17:34.133941 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 23:17:34.133950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:17:34.133961 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 23:17:34.133971 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 23:17:34.133980 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 23:17:34.133990 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 23:17:34.134000 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 23:17:34.134022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:17:34.134032 kernel: loop: module loaded Nov 23 23:17:34.134053 kernel: fuse: init (API version 7.41) Nov 23 23:17:34.134064 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:17:34.134075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:17:34.134084 kernel: ACPI: bus type drm_connector registered Nov 23 23:17:34.134094 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:17:34.134104 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Nov 23 23:17:34.134114 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 23:17:34.134127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:17:34.134137 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 23:17:34.134147 systemd[1]: Stopped verity-setup.service. Nov 23 23:17:34.134179 systemd-journald[1168]: Collecting audit messages is disabled. Nov 23 23:17:34.134203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 23:17:34.134215 systemd-journald[1168]: Journal started Nov 23 23:17:34.134234 systemd-journald[1168]: Runtime Journal (/run/log/journal/73a9475a970f4ffe834c9f107feccabf) is 6M, max 48.5M, 42.4M free. Nov 23 23:17:33.898446 systemd[1]: Queued start job for default target multi-user.target. Nov 23 23:17:33.921946 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 23 23:17:33.922347 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 23:17:34.136666 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 23:17:34.138713 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:17:34.139389 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 23:17:34.140365 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 23:17:34.141411 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 23:17:34.142567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 23:17:34.145019 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 23:17:34.146280 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:17:34.148325 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 23:17:34.148509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 23:17:34.149754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:17:34.149905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:17:34.151188 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:17:34.151331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:17:34.152595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:17:34.152755 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:17:34.154106 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 23:17:34.154258 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 23:17:34.155577 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:17:34.155721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:17:34.157208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:17:34.158440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:17:34.160352 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 23:17:34.161726 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 23:17:34.174486 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:17:34.176734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Nov 23 23:17:34.178793 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 23:17:34.179962 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 23:17:34.180011 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:17:34.181866 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 23:17:34.192870 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 23:17:34.194085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:17:34.195517 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 23:17:34.197551 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 23:17:34.198741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:17:34.199766 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 23:17:34.200951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:17:34.204131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:17:34.206216 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 23:17:34.206749 systemd-journald[1168]: Time spent on flushing to /var/log/journal/73a9475a970f4ffe834c9f107feccabf is 18.812ms for 887 entries. Nov 23 23:17:34.206749 systemd-journald[1168]: System Journal (/var/log/journal/73a9475a970f4ffe834c9f107feccabf) is 8M, max 195.6M, 187.6M free. Nov 23 23:17:34.233461 systemd-journald[1168]: Received client request to flush runtime journal. Nov 23 23:17:34.233558 kernel: loop0: detected capacity change from 0 to 119840 Nov 23 23:17:34.209183 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:17:34.214094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:17:34.215860 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 23:17:34.217910 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 23:17:34.219526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 23:17:34.223033 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 23:17:34.226535 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 23:17:34.238523 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 23:17:34.245530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:17:34.249039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 23:17:34.252143 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 23 23:17:34.252161 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 23 23:17:34.258084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:17:34.262776 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
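[Editor's note] The systemd-journald lines above report the size caps for the volatile journal under /run/log/journal (max 48.5M) and, at flush time, the persistent journal under /var/log/journal (max 195.6M). Those caps are journald's computed percentage-of-filesystem defaults; a hypothetical drop-in like the one below is how they would be pinned explicitly (values are illustrative, not the defaults):

    # /etc/systemd/journald.conf.d/size.conf -- hypothetical drop-in
    [Journal]
    RuntimeMaxUse=48M     # cap for the volatile journal in /run/log/journal
    SystemMaxUse=195M     # cap for the persistent journal in /var/log/journal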
Nov 23 23:17:34.265528 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 23:17:34.273030 kernel: loop1: detected capacity change from 0 to 100632 Nov 23 23:17:34.296050 kernel: loop2: detected capacity change from 0 to 200800 Nov 23 23:17:34.296864 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 23:17:34.301233 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:17:34.321022 kernel: loop3: detected capacity change from 0 to 119840 Nov 23 23:17:34.322139 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Nov 23 23:17:34.322157 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Nov 23 23:17:34.325300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:17:34.329059 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 23:17:34.335029 kernel: loop5: detected capacity change from 0 to 200800 Nov 23 23:17:34.339736 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 23 23:17:34.340213 (sd-merge)[1237]: Merged extensions into '/usr'. Nov 23 23:17:34.344928 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 23:17:34.344948 systemd[1]: Reloading... Nov 23 23:17:34.406042 zram_generator::config[1261]: No configuration found. Nov 23 23:17:34.481064 ldconfig[1205]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:17:34.562613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 23:17:34.562886 systemd[1]: Reloading finished in 217 ms. Nov 23 23:17:34.599788 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:17:34.601319 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 23:17:34.610165 systemd[1]: Starting ensure-sysext.service... Nov 23 23:17:34.611940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:17:34.620610 systemd[1]: Reload requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)... Nov 23 23:17:34.620703 systemd[1]: Reloading... Nov 23 23:17:34.624713 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 23:17:34.624752 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 23:17:34.624972 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 23:17:34.625180 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 23:17:34.625770 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 23:17:34.625960 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Nov 23 23:17:34.626000 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Nov 23 23:17:34.628992 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:17:34.629032 systemd-tmpfiles[1299]: Skipping /boot Nov 23 23:17:34.635324 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. 
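[Editor's note] The (sd-merge) lines above show systemd-sysext activating the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, followed by the daemon reload. A rough sketch of how that merge can be inspected or redone by hand; the symlink is the one Ignition wrote earlier, and command output is omitted:

    # list the sysext image and the /etc/extensions symlink created by Ignition
    ls -l /etc/extensions/kubernetes.raw /opt/extensions/kubernetes/
    # show which extension images are currently merged into /usr
    systemd-sysext status
    # drop and re-apply all extension images
    sudo systemd-sysext refresh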
Nov 23 23:17:34.635337 systemd-tmpfiles[1299]: Skipping /boot Nov 23 23:17:34.665062 zram_generator::config[1326]: No configuration found. Nov 23 23:17:34.806866 systemd[1]: Reloading finished in 185 ms. Nov 23 23:17:34.827670 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 23:17:34.833660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:17:34.841983 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:17:34.844374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 23:17:34.846599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 23:17:34.849903 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:17:34.853304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:17:34.856134 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 23:17:34.865290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 23:17:34.871425 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:17:34.875456 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:17:34.878918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:17:34.880127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:17:34.882416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:17:34.886283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:17:34.887592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:17:34.887711 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:17:34.892303 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 23:17:34.894898 systemd-udevd[1370]: Using default interface naming scheme 'v255'. Nov 23 23:17:34.895745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:17:34.897172 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:17:34.898270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:17:34.898379 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:17:34.901075 systemd[1]: Finished ensure-sysext.service. Nov 23 23:17:34.907178 augenrules[1395]: No rules Nov 23 23:17:34.911568 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 23 23:17:34.913715 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:17:34.915042 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 23 23:17:34.916337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:17:34.918223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:17:34.918365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:17:34.919719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:17:34.921164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:17:34.922581 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:17:34.922727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:17:34.926232 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:17:34.927532 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:17:34.928076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:17:34.929364 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:17:34.939172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:17:34.940047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:17:34.940109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:17:34.940129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:17:34.940254 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 23:17:34.977171 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 23:17:35.060153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 23 23:17:35.061503 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 23:17:35.062803 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:17:35.063613 systemd-resolved[1365]: Positive Trust Anchors: Nov 23 23:17:35.063629 systemd-resolved[1365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:17:35.063659 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:17:35.064862 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:17:35.068706 systemd-networkd[1437]: lo: Link UP Nov 23 23:17:35.068713 systemd-networkd[1437]: lo: Gained carrier Nov 23 23:17:35.069625 systemd-networkd[1437]: Enumeration completed Nov 23 23:17:35.069721 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 23 23:17:35.070147 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:17:35.070156 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:17:35.070715 systemd-networkd[1437]: eth0: Link UP Nov 23 23:17:35.070820 systemd-networkd[1437]: eth0: Gained carrier Nov 23 23:17:35.070838 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:17:35.071891 systemd-resolved[1365]: Defaulting to hostname 'linux'. Nov 23 23:17:35.076057 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 23:17:35.078381 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 23:17:35.079713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:17:35.080953 systemd[1]: Reached target network.target - Network. Nov 23 23:17:35.081773 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:17:35.082821 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:17:35.084180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:17:35.085099 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 23:17:35.085346 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:17:35.086241 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Nov 23 23:17:35.086692 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:17:35.086969 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 23 23:17:35.087102 systemd-timesyncd[1402]: Initial clock synchronization to Sun 2025-11-23 23:17:35.179132 UTC. Nov 23 23:17:35.088273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:17:35.089455 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 23:17:35.090544 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:17:35.090572 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:17:35.091394 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:17:35.093111 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:17:35.095318 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:17:35.098069 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:17:35.099342 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:17:35.100523 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:17:35.103848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:17:35.105686 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:17:35.109374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
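[Editor's note] eth0 above is matched against the catch-all /usr/lib/systemd/network/zz-default.network unit and then acquires 10.0.0.108/16 via DHCP. A minimal sketch of what such a catch-all unit looks like; the file Flatcar actually ships may differ in detail:

    # catch-all .network unit: match any interface, configure it via DHCP
    [Match]
    Name=*

    [Network]
    DHCP=yes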
Nov 23 23:17:35.110790 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:17:35.112684 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:17:35.113742 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:17:35.114734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:17:35.114763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:17:35.120207 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:17:35.124266 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:17:35.131573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:17:35.136911 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:17:35.139717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:17:35.141902 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:17:35.143162 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:17:35.145466 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:17:35.146412 jq[1474]: false Nov 23 23:17:35.147994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:17:35.150204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:17:35.155681 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:17:35.157758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:17:35.158280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:17:35.159024 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:17:35.159761 extend-filesystems[1475]: Found /dev/vda6 Nov 23 23:17:35.165139 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:17:35.167208 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 23:17:35.171839 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:17:35.175399 extend-filesystems[1475]: Found /dev/vda9 Nov 23 23:17:35.175781 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:17:35.175960 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:17:35.180478 extend-filesystems[1475]: Checking size of /dev/vda9 Nov 23 23:17:35.180757 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:17:35.182297 jq[1490]: true Nov 23 23:17:35.181201 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
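[Editor's note] containerd.service is being started above; its own log lines follow below, including a "Configuration migrated from version 2" warning for /usr/share/containerd/config.toml and a dumped CRI config with "SystemdCgroup":true. A hypothetical minimal config.toml in the older version-2 schema that containerd 2.0 would migrate the same way (not the file Flatcar ships):

    # Hypothetical version-2 containerd config; the only non-default setting
    # shown is the systemd cgroup driver visible in the CRI config dump below.
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true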
Nov 23 23:17:35.191121 extend-filesystems[1475]: Resized partition /dev/vda9 Nov 23 23:17:35.196960 extend-filesystems[1510]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:17:35.202363 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:17:35.207634 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 23 23:17:35.207697 jq[1504]: true Nov 23 23:17:35.208506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:17:35.209103 tar[1500]: linux-arm64/LICENSE Nov 23 23:17:35.209103 tar[1500]: linux-arm64/helm Nov 23 23:17:35.219168 update_engine[1488]: I20251123 23:17:35.218215 1488 main.cc:92] Flatcar Update Engine starting Nov 23 23:17:35.219391 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:17:35.219642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:17:35.231855 dbus-daemon[1471]: [system] SELinux support is enabled Nov 23 23:17:35.232077 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:17:35.234965 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:17:35.235000 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:17:35.236465 update_engine[1488]: I20251123 23:17:35.236415 1488 update_check_scheduler.cc:74] Next update check in 8m33s Nov 23 23:17:35.238861 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:17:35.238886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:17:35.245930 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:17:35.248027 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 23 23:17:35.248980 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:17:35.261745 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 23 23:17:35.261745 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 23 23:17:35.261745 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 23 23:17:35.266717 extend-filesystems[1475]: Resized filesystem in /dev/vda9 Nov 23 23:17:35.262754 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:17:35.266170 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:17:35.288725 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:17:35.290480 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:17:35.292540 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 23 23:17:35.322629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:17:35.327292 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:17:35.337928 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:17:35.338167 systemd-logind[1486]: New seat seat0. Nov 23 23:17:35.339280 systemd[1]: Started systemd-logind.service - User Login Management. 
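[Editor's note] update_engine starts above and schedules its next check, and locksmithd comes up reporting strategy "reboot". Both consult /etc/flatcar/update.conf, which Ignition wrote during the files stage; its real contents are not shown in the log, but a typical example looks like this (GROUP selects the release channel read by update_engine, REBOOT_STRATEGY is what locksmithd reports as its strategy):

    # /etc/flatcar/update.conf -- illustrative example, not the file's actual contents
    GROUP=stable
    REBOOT_STRATEGY=reboot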
Nov 23 23:17:35.413717 containerd[1506]: time="2025-11-23T23:17:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:17:35.415491 containerd[1506]: time="2025-11-23T23:17:35.415441280Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:17:35.424890 containerd[1506]: time="2025-11-23T23:17:35.424847520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.56µs" Nov 23 23:17:35.424890 containerd[1506]: time="2025-11-23T23:17:35.424885640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:17:35.424973 containerd[1506]: time="2025-11-23T23:17:35.424902720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:17:35.425089 containerd[1506]: time="2025-11-23T23:17:35.425065800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:17:35.425089 containerd[1506]: time="2025-11-23T23:17:35.425083160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:17:35.425130 containerd[1506]: time="2025-11-23T23:17:35.425105920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425192 containerd[1506]: time="2025-11-23T23:17:35.425151040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425192 containerd[1506]: time="2025-11-23T23:17:35.425161840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425390 containerd[1506]: time="2025-11-23T23:17:35.425367120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425424 containerd[1506]: time="2025-11-23T23:17:35.425390720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425424 containerd[1506]: time="2025-11-23T23:17:35.425402320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425424 containerd[1506]: time="2025-11-23T23:17:35.425410240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425517 containerd[1506]: time="2025-11-23T23:17:35.425498480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425699 containerd[1506]: time="2025-11-23T23:17:35.425680640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425729 containerd[1506]: time="2025-11-23T23:17:35.425713520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 23 23:17:35.425729 containerd[1506]: time="2025-11-23T23:17:35.425725440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:17:35.425817 containerd[1506]: time="2025-11-23T23:17:35.425754600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:17:35.425994 containerd[1506]: time="2025-11-23T23:17:35.425965800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:17:35.426065 containerd[1506]: time="2025-11-23T23:17:35.426050600Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:17:35.429698 containerd[1506]: time="2025-11-23T23:17:35.429663320Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429724480Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429739640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429751560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429762560Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429793120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429806080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:17:35.429820 containerd[1506]: time="2025-11-23T23:17:35.429817960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:17:35.429952 containerd[1506]: time="2025-11-23T23:17:35.429827840Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:17:35.429952 containerd[1506]: time="2025-11-23T23:17:35.429838840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:17:35.429952 containerd[1506]: time="2025-11-23T23:17:35.429847960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:17:35.429952 containerd[1506]: time="2025-11-23T23:17:35.429860760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.429983480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430027280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430044440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430066680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430077400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430091760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430103000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430116440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430127280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430137720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:17:35.430146 containerd[1506]: time="2025-11-23T23:17:35.430147680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:17:35.430674 containerd[1506]: time="2025-11-23T23:17:35.430322080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:17:35.430674 containerd[1506]: time="2025-11-23T23:17:35.430336720Z" level=info msg="Start snapshots syncer" Nov 23 23:17:35.430674 containerd[1506]: time="2025-11-23T23:17:35.430364640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:17:35.430730 containerd[1506]: time="2025-11-23T23:17:35.430654920Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:17:35.430730 containerd[1506]: time="2025-11-23T23:17:35.430705720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:17:35.431158 containerd[1506]: time="2025-11-23T23:17:35.431097800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:17:35.431283 containerd[1506]: time="2025-11-23T23:17:35.431251600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:17:35.431314 containerd[1506]: time="2025-11-23T23:17:35.431293520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:17:35.431400 containerd[1506]: time="2025-11-23T23:17:35.431377560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:17:35.431483 containerd[1506]: time="2025-11-23T23:17:35.431436960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:17:35.431518 containerd[1506]: time="2025-11-23T23:17:35.431504680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:17:35.431629 containerd[1506]: time="2025-11-23T23:17:35.431604720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:17:35.431668 containerd[1506]: time="2025-11-23T23:17:35.431628880Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:17:35.431779 containerd[1506]: time="2025-11-23T23:17:35.431696280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:17:35.431779 containerd[1506]: 
time="2025-11-23T23:17:35.431716880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:17:35.431779 containerd[1506]: time="2025-11-23T23:17:35.431735000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:17:35.431848 containerd[1506]: time="2025-11-23T23:17:35.431781400Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:17:35.431848 containerd[1506]: time="2025-11-23T23:17:35.431804800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:17:35.431848 containerd[1506]: time="2025-11-23T23:17:35.431815880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:17:35.431848 containerd[1506]: time="2025-11-23T23:17:35.431829120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:17:35.431848 containerd[1506]: time="2025-11-23T23:17:35.431840480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:17:35.431935 containerd[1506]: time="2025-11-23T23:17:35.431851680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:17:35.431935 containerd[1506]: time="2025-11-23T23:17:35.431867240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:17:35.432162 containerd[1506]: time="2025-11-23T23:17:35.431962800Z" level=info msg="runtime interface created" Nov 23 23:17:35.432162 containerd[1506]: time="2025-11-23T23:17:35.431980360Z" level=info msg="created NRI interface" Nov 23 23:17:35.432162 containerd[1506]: time="2025-11-23T23:17:35.431994120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:17:35.432162 containerd[1506]: time="2025-11-23T23:17:35.432028480Z" level=info msg="Connect containerd service" Nov 23 23:17:35.432162 containerd[1506]: time="2025-11-23T23:17:35.432065680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:17:35.433604 containerd[1506]: time="2025-11-23T23:17:35.433555240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:17:35.509580 containerd[1506]: time="2025-11-23T23:17:35.509514920Z" level=info msg="Start subscribing containerd event" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509723280Z" level=info msg="Start recovering state" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509835040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509842400Z" level=info msg="Start event monitor" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509884040Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509890080Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509899880Z" level=info msg="Start streaming server" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509908560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:17:35.509852 containerd[1506]: time="2025-11-23T23:17:35.509914800Z" level=info msg="runtime interface starting up..." Nov 23 23:17:35.511371 containerd[1506]: time="2025-11-23T23:17:35.509920400Z" level=info msg="starting plugins..." Nov 23 23:17:35.511371 containerd[1506]: time="2025-11-23T23:17:35.511182840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:17:35.511371 containerd[1506]: time="2025-11-23T23:17:35.511355320Z" level=info msg="containerd successfully booted in 0.098230s" Nov 23 23:17:35.511459 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:17:35.552262 tar[1500]: linux-arm64/README.md Nov 23 23:17:35.570455 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:17:35.642863 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:17:35.662138 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:17:35.664604 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:17:35.682817 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:17:35.683063 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:17:35.685575 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:17:35.702936 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:17:35.705672 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:17:35.707719 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:17:35.709130 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:17:36.475317 systemd-networkd[1437]: eth0: Gained IPv6LL Nov 23 23:17:36.477707 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:17:36.479444 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:17:36.481920 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 23 23:17:36.484394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:17:36.494735 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:17:36.510665 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 23 23:17:36.512102 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 23 23:17:36.513605 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:17:36.516257 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:17:37.061448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:37.062971 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:17:37.065088 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:17:37.065390 systemd[1]: Startup finished in 2.093s (kernel) + 4.873s (initrd) + 3.639s (userspace) = 10.606s. 
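The "no network config found in /etc/cni/net.d" error reported by containerd above is expected this early in boot: the CRI plugin looks for a CNI configuration that nothing has installed yet, and it simply retries until a pod network add-on writes one. As a rough sketch only (not taken from this system), a minimal bridge-based conflist dropped into that directory could look like the following; the network name and subnet are illustrative.

cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
    }
  ]
}
EOF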
Nov 23 23:17:37.375621 kubelet[1615]: E1123 23:17:37.375553 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:17:37.379298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:17:37.379447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:17:37.379790 systemd[1]: kubelet.service: Consumed 684ms CPU time, 248.8M memory peak. Nov 23 23:17:41.856205 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:17:41.857211 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:37234.service - OpenSSH per-connection server daemon (10.0.0.1:37234). Nov 23 23:17:41.953022 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 37234 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:41.955810 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:41.963494 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:17:41.964438 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:17:41.972247 systemd-logind[1486]: New session 1 of user core. Nov 23 23:17:41.990469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:17:41.994390 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:17:42.018323 (systemd)[1633]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:17:42.020934 systemd-logind[1486]: New session c1 of user core. Nov 23 23:17:42.115455 systemd[1633]: Queued start job for default target default.target. Nov 23 23:17:42.125100 systemd[1633]: Created slice app.slice - User Application Slice. Nov 23 23:17:42.125125 systemd[1633]: Reached target paths.target - Paths. Nov 23 23:17:42.125167 systemd[1633]: Reached target timers.target - Timers. Nov 23 23:17:42.126386 systemd[1633]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:17:42.135934 systemd[1633]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:17:42.136021 systemd[1633]: Reached target sockets.target - Sockets. Nov 23 23:17:42.136061 systemd[1633]: Reached target basic.target - Basic System. Nov 23 23:17:42.136089 systemd[1633]: Reached target default.target - Main User Target. Nov 23 23:17:42.136112 systemd[1633]: Startup finished in 109ms. Nov 23 23:17:42.136259 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:17:42.137844 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:17:42.211199 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:37246.service - OpenSSH per-connection server daemon (10.0.0.1:37246). Nov 23 23:17:42.275202 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 37246 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:42.276635 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:42.281143 systemd-logind[1486]: New session 2 of user core. Nov 23 23:17:42.305259 systemd[1]: Started session-2.scope - Session 2 of User core. 
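The kubelet failure above is the missing /var/lib/kubelet/config.yaml: on a kubeadm-style node that file only appears during cluster bootstrap, so the unit keeps exiting until then. For orientation only, a minimal KubeletConfiguration of the kind that later ends up at that path might look like the sketch below; every value is illustrative and the actual file written on this host is not shown in the log.

cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd               # matches the systemd cgroup driver reported later in this log
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                      # illustrative cluster DNS address
EOF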
Nov 23 23:17:42.357130 sshd[1647]: Connection closed by 10.0.0.1 port 37246 Nov 23 23:17:42.357593 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Nov 23 23:17:42.368925 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:37246.service: Deactivated successfully. Nov 23 23:17:42.370545 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:17:42.372851 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:17:42.383674 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Nov 23 23:17:42.384336 systemd-logind[1486]: Removed session 2. Nov 23 23:17:42.447286 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:42.448592 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:42.454053 systemd-logind[1486]: New session 3 of user core. Nov 23 23:17:42.461255 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:17:42.510381 sshd[1656]: Connection closed by 10.0.0.1 port 37254 Nov 23 23:17:42.510815 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Nov 23 23:17:42.523973 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:37254.service: Deactivated successfully. Nov 23 23:17:42.525438 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:17:42.527549 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:17:42.529689 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Nov 23 23:17:42.530339 systemd-logind[1486]: Removed session 3. Nov 23 23:17:42.596297 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:42.597728 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:42.602614 systemd-logind[1486]: New session 4 of user core. Nov 23 23:17:42.613225 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:17:42.665882 sshd[1665]: Connection closed by 10.0.0.1 port 37268 Nov 23 23:17:42.666218 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Nov 23 23:17:42.680561 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:37268.service: Deactivated successfully. Nov 23 23:17:42.683450 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:17:42.684262 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:17:42.686679 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:37284.service - OpenSSH per-connection server daemon (10.0.0.1:37284). Nov 23 23:17:42.687301 systemd-logind[1486]: Removed session 4. Nov 23 23:17:42.744434 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 37284 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:42.745800 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:42.750438 systemd-logind[1486]: New session 5 of user core. Nov 23 23:17:42.756327 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 23 23:17:42.813435 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:17:42.813724 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:17:42.827151 sudo[1675]: pam_unix(sudo:session): session closed for user root Nov 23 23:17:42.828926 sshd[1674]: Connection closed by 10.0.0.1 port 37284 Nov 23 23:17:42.829619 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Nov 23 23:17:42.848429 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:37284.service: Deactivated successfully. Nov 23 23:17:42.852185 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:17:42.852970 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:17:42.855444 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Nov 23 23:17:42.855882 systemd-logind[1486]: Removed session 5. Nov 23 23:17:42.924563 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:42.926143 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:42.931672 systemd-logind[1486]: New session 6 of user core. Nov 23 23:17:42.941202 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:17:42.995576 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:17:42.996463 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:17:43.088664 sudo[1686]: pam_unix(sudo:session): session closed for user root Nov 23 23:17:43.093710 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:17:43.093971 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:17:43.104689 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:17:43.153125 augenrules[1708]: No rules Nov 23 23:17:43.153801 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:17:43.154003 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:17:43.155100 sudo[1685]: pam_unix(sudo:session): session closed for user root Nov 23 23:17:43.156447 sshd[1684]: Connection closed by 10.0.0.1 port 37294 Nov 23 23:17:43.156829 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Nov 23 23:17:43.164913 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:37294.service: Deactivated successfully. Nov 23 23:17:43.167892 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:17:43.168726 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:17:43.171396 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:37308.service - OpenSSH per-connection server daemon (10.0.0.1:37308). Nov 23 23:17:43.172542 systemd-logind[1486]: Removed session 6. Nov 23 23:17:43.231932 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 37308 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:17:43.233621 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:17:43.237918 systemd-logind[1486]: New session 7 of user core. Nov 23 23:17:43.246205 systemd[1]: Started session-7.scope - Session 7 of User core. 
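The sudo entries in sessions 5 and 6 above amount to a short provisioning sequence run by the core user; written out as plain shell, the commands are exactly those logged and only the comments are added:

sudo setenforce 1                                   # put SELinux into enforcing mode
sudo rm -rf /etc/audit/rules.d/80-selinux.rules \
            /etc/audit/rules.d/99-default.rules     # drop the shipped audit rule files
sudo systemctl restart audit-rules                  # reload rules; augenrules then reports "No rules" above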
Nov 23 23:17:43.296315 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:17:43.296582 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:17:43.584217 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:17:43.603386 (dockerd)[1741]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:17:43.801936 dockerd[1741]: time="2025-11-23T23:17:43.801865268Z" level=info msg="Starting up" Nov 23 23:17:43.802722 dockerd[1741]: time="2025-11-23T23:17:43.802689883Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:17:43.813858 dockerd[1741]: time="2025-11-23T23:17:43.813806883Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:17:43.859473 dockerd[1741]: time="2025-11-23T23:17:43.859363703Z" level=info msg="Loading containers: start." Nov 23 23:17:43.872044 kernel: Initializing XFRM netlink socket Nov 23 23:17:44.079113 systemd-networkd[1437]: docker0: Link UP Nov 23 23:17:44.083119 dockerd[1741]: time="2025-11-23T23:17:44.082943742Z" level=info msg="Loading containers: done." Nov 23 23:17:44.095352 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2260315609-merged.mount: Deactivated successfully. Nov 23 23:17:44.099788 dockerd[1741]: time="2025-11-23T23:17:44.099740159Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:17:44.099874 dockerd[1741]: time="2025-11-23T23:17:44.099834322Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:17:44.099945 dockerd[1741]: time="2025-11-23T23:17:44.099927642Z" level=info msg="Initializing buildkit" Nov 23 23:17:44.121687 dockerd[1741]: time="2025-11-23T23:17:44.121581872Z" level=info msg="Completed buildkit initialization" Nov 23 23:17:44.129882 dockerd[1741]: time="2025-11-23T23:17:44.129824407Z" level=info msg="Daemon has completed initialization" Nov 23 23:17:44.130065 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:17:44.130426 dockerd[1741]: time="2025-11-23T23:17:44.129921861Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:17:44.573616 containerd[1506]: time="2025-11-23T23:17:44.573183505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 23 23:17:45.195057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588523325.mount: Deactivated successfully. 
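The PullImage line above is containerd's CRI plugin starting to fetch the control-plane images; the next stretch of log repeats the same pattern for each image. If crictl happens to be installed on the host, the same pull can be reproduced or inspected by hand against the socket containerd is already serving on (the endpoint below is taken from the log; the rest is just an example):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
       --image-endpoint unix:///run/containerd/containerd.sock \
       pull registry.k8s.io/kube-apiserver:v1.34.2
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep kube-apiserver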
Nov 23 23:17:46.143233 containerd[1506]: time="2025-11-23T23:17:46.143187298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:46.144469 containerd[1506]: time="2025-11-23T23:17:46.144418488Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=24563046" Nov 23 23:17:46.145334 containerd[1506]: time="2025-11-23T23:17:46.145307408Z" level=info msg="ImageCreate event name:\"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:46.147587 containerd[1506]: time="2025-11-23T23:17:46.147553177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:46.149533 containerd[1506]: time="2025-11-23T23:17:46.149369612Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"24559643\" in 1.576141437s" Nov 23 23:17:46.149533 containerd[1506]: time="2025-11-23T23:17:46.149407686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7\"" Nov 23 23:17:46.150010 containerd[1506]: time="2025-11-23T23:17:46.149983622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 23 23:17:47.133313 containerd[1506]: time="2025-11-23T23:17:47.133216879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:47.133837 containerd[1506]: time="2025-11-23T23:17:47.133799686Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=19134214" Nov 23 23:17:47.134844 containerd[1506]: time="2025-11-23T23:17:47.134809460Z" level=info msg="ImageCreate event name:\"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:47.137738 containerd[1506]: time="2025-11-23T23:17:47.137680093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:47.139132 containerd[1506]: time="2025-11-23T23:17:47.138675226Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"20718696\" in 988.648475ms" Nov 23 23:17:47.139132 containerd[1506]: time="2025-11-23T23:17:47.138709322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2\"" Nov 23 23:17:47.139506 
containerd[1506]: time="2025-11-23T23:17:47.139448170Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 23 23:17:47.624526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:17:47.626276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:17:47.777689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:47.791406 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:17:47.830503 kubelet[2026]: E1123 23:17:47.830427 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:17:47.833635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:17:47.833772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:17:47.835103 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.2M memory peak. Nov 23 23:17:48.121907 containerd[1506]: time="2025-11-23T23:17:48.121797824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:48.122718 containerd[1506]: time="2025-11-23T23:17:48.122685216Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=14191285" Nov 23 23:17:48.123661 containerd[1506]: time="2025-11-23T23:17:48.123634892Z" level=info msg="ImageCreate event name:\"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:48.126697 containerd[1506]: time="2025-11-23T23:17:48.126656219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:48.128280 containerd[1506]: time="2025-11-23T23:17:48.128245712Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"15775785\" in 988.761161ms" Nov 23 23:17:48.128354 containerd[1506]: time="2025-11-23T23:17:48.128283893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949\"" Nov 23 23:17:48.128778 containerd[1506]: time="2025-11-23T23:17:48.128715837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 23 23:17:49.065492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286697747.mount: Deactivated successfully. 
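The "Referenced but unset environment variable" warnings for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS mean the kubelet unit references environment files that have not been written yet. For orientation, the upstream kubeadm drop-in that normally defines them looks roughly like the comment block below; paths and exact contents can differ on Flatcar and are not copied from this host.

systemctl cat kubelet.service      # shows the unit plus whatever drop-ins define these variables
# A typical kubeadm drop-in (10-kubeadm.conf) contains roughly:
#   [Service]
#   Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
#   EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env    # defines KUBELET_KUBEADM_ARGS
#   EnvironmentFile=-/etc/default/kubelet                  # defines KUBELET_EXTRA_ARGS
#   ExecStart=
#   ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS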
Nov 23 23:17:49.336544 containerd[1506]: time="2025-11-23T23:17:49.336396924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:49.337788 containerd[1506]: time="2025-11-23T23:17:49.337745315Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=22803243" Nov 23 23:17:49.338547 containerd[1506]: time="2025-11-23T23:17:49.338512020Z" level=info msg="ImageCreate event name:\"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:49.342207 containerd[1506]: time="2025-11-23T23:17:49.342167182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:49.343077 containerd[1506]: time="2025-11-23T23:17:49.342759975Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"22802260\" in 1.213973512s" Nov 23 23:17:49.343077 containerd[1506]: time="2025-11-23T23:17:49.342789729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786\"" Nov 23 23:17:49.343404 containerd[1506]: time="2025-11-23T23:17:49.343379916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 23 23:17:49.816351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560168195.mount: Deactivated successfully. 
Nov 23 23:17:50.951141 containerd[1506]: time="2025-11-23T23:17:50.951028792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:50.953681 containerd[1506]: time="2025-11-23T23:17:50.953649578Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 23 23:17:50.955514 containerd[1506]: time="2025-11-23T23:17:50.955484093Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:50.958826 containerd[1506]: time="2025-11-23T23:17:50.958785024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:50.959736 containerd[1506]: time="2025-11-23T23:17:50.959698071Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.6162874s" Nov 23 23:17:50.959736 containerd[1506]: time="2025-11-23T23:17:50.959729264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 23 23:17:50.960237 containerd[1506]: time="2025-11-23T23:17:50.960194267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 23 23:17:51.406225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175997703.mount: Deactivated successfully. 
Nov 23 23:17:51.413664 containerd[1506]: time="2025-11-23T23:17:51.413594509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:51.415305 containerd[1506]: time="2025-11-23T23:17:51.415257101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 23 23:17:51.416999 containerd[1506]: time="2025-11-23T23:17:51.416947554Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:51.422241 containerd[1506]: time="2025-11-23T23:17:51.421666062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:51.423075 containerd[1506]: time="2025-11-23T23:17:51.422971715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 462.732867ms" Nov 23 23:17:51.423075 containerd[1506]: time="2025-11-23T23:17:51.423018336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 23 23:17:51.423920 containerd[1506]: time="2025-11-23T23:17:51.423879418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 23 23:17:51.939787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232132938.mount: Deactivated successfully. Nov 23 23:17:53.914581 containerd[1506]: time="2025-11-23T23:17:53.914525238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:53.917461 containerd[1506]: time="2025-11-23T23:17:53.917418234Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Nov 23 23:17:53.921416 containerd[1506]: time="2025-11-23T23:17:53.921362289Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:53.925913 containerd[1506]: time="2025-11-23T23:17:53.925861650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:17:53.926906 containerd[1506]: time="2025-11-23T23:17:53.926875397Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.502968562s" Nov 23 23:17:53.926954 containerd[1506]: time="2025-11-23T23:17:53.926912589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 23 23:17:57.875441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
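At this point the kubelet is simply crash-looping on the missing config file while the image pre-pull continues in parallel; the restart counter mentioned above and the last failure are visible with ordinary systemd tooling:

systemctl status kubelet.service --no-pager
systemctl show kubelet.service -p NRestarts        # the counter the "Scheduled restart job" messages refer to
journalctl -u kubelet.service -n 20 --no-pager     # ends with the "failed to load Kubelet config file" error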
Nov 23 23:17:57.878863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:17:58.039499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:58.056328 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:17:58.092609 kubelet[2188]: E1123 23:17:58.092544 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:17:58.095084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:17:58.095218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:17:58.096035 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108M memory peak. Nov 23 23:17:59.441022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:59.441466 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108M memory peak. Nov 23 23:17:59.444280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:17:59.469798 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)... Nov 23 23:17:59.469846 systemd[1]: Reloading... Nov 23 23:17:59.539087 zram_generator::config[2246]: No configuration found. Nov 23 23:17:59.743597 systemd[1]: Reloading finished in 273 ms. Nov 23 23:17:59.798500 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 23:17:59.798730 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 23 23:17:59.799126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:59.799236 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95M memory peak. Nov 23 23:17:59.800852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:17:59.920646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:17:59.924674 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:17:59.956959 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:17:59.956959 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
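The daemon reload requested from session-7.scope at 23:17:59, followed by a kubelet that now parses real flags and finds its config file, is consistent with the earlier /home/core/install.sh having run a bootstrap tool such as kubeadm around this time. The script itself is not reproduced in the log, so the following is a purely hypothetical reconstruction of that step:

# Hypothetical: install.sh is not shown in this log, and the flag value is illustrative.
kubeadm init --pod-network-cidr=10.244.0.0/16   # writes /var/lib/kubelet/config.yaml,
                                                # /etc/kubernetes/manifests/*.yaml and /etc/kubernetes/pki/
# kubeadm reloads systemd and (re)starts kubelet.service itself as part of init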
Nov 23 23:17:59.957286 kubelet[2291]: I1123 23:17:59.956996 2291 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:18:01.471503 kubelet[2291]: I1123 23:18:01.471454 2291 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 23:18:01.471503 kubelet[2291]: I1123 23:18:01.471488 2291 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:18:01.472640 kubelet[2291]: I1123 23:18:01.472604 2291 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 23:18:01.472640 kubelet[2291]: I1123 23:18:01.472630 2291 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:18:01.472907 kubelet[2291]: I1123 23:18:01.472881 2291 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:18:01.549175 kubelet[2291]: E1123 23:18:01.549139 2291 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 23:18:01.549460 kubelet[2291]: I1123 23:18:01.549446 2291 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:18:01.553223 kubelet[2291]: I1123 23:18:01.553204 2291 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:18:01.555839 kubelet[2291]: I1123 23:18:01.555820 2291 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 23 23:18:01.556076 kubelet[2291]: I1123 23:18:01.556051 2291 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:18:01.556219 kubelet[2291]: I1123 23:18:01.556077 2291 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:18:01.556219 kubelet[2291]: I1123 23:18:01.556218 2291 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:18:01.556325 kubelet[2291]: I1123 23:18:01.556225 2291 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 23:18:01.556325 kubelet[2291]: I1123 23:18:01.556323 2291 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 23:18:01.558665 kubelet[2291]: I1123 23:18:01.558647 2291 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:18:01.559825 kubelet[2291]: I1123 23:18:01.559735 2291 kubelet.go:475] "Attempting to sync node with API server" Nov 23 23:18:01.559825 kubelet[2291]: I1123 23:18:01.559758 2291 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:18:01.560230 kubelet[2291]: I1123 23:18:01.560205 2291 kubelet.go:387] "Adding apiserver pod source" Nov 23 23:18:01.560432 kubelet[2291]: I1123 23:18:01.560408 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:18:01.560986 kubelet[2291]: E1123 23:18:01.560945 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:18:01.561108 kubelet[2291]: E1123 23:18:01.561082 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 23:18:01.561823 kubelet[2291]: I1123 23:18:01.561802 2291 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:18:01.562556 kubelet[2291]: I1123 23:18:01.562533 2291 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:18:01.562634 kubelet[2291]: I1123 23:18:01.562624 2291 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 23:18:01.562704 kubelet[2291]: W1123 23:18:01.562696 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 23:18:01.565263 kubelet[2291]: I1123 23:18:01.565246 2291 server.go:1262] "Started kubelet" Nov 23 23:18:01.566390 kubelet[2291]: I1123 23:18:01.566332 2291 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:18:01.566390 kubelet[2291]: I1123 23:18:01.566392 2291 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 23:18:01.566710 kubelet[2291]: I1123 23:18:01.566689 2291 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:18:01.566846 kubelet[2291]: I1123 23:18:01.566828 2291 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:18:01.567096 kubelet[2291]: I1123 23:18:01.567079 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:18:01.567930 kubelet[2291]: I1123 23:18:01.567630 2291 server.go:310] "Adding debug handlers to kubelet server" Nov 23 23:18:01.569958 kubelet[2291]: I1123 23:18:01.569775 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:18:01.570646 kubelet[2291]: E1123 23:18:01.569342 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac5f9fdd6d245 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:18:01.565213253 +0000 UTC m=+1.637624292,LastTimestamp:2025-11-23 23:18:01.565213253 +0000 UTC m=+1.637624292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:18:01.570867 kubelet[2291]: E1123 23:18:01.570848 2291 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:18:01.570896 kubelet[2291]: I1123 23:18:01.570879 2291 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 23:18:01.571702 kubelet[2291]: I1123 23:18:01.571103 2291 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 23:18:01.571702 kubelet[2291]: I1123 23:18:01.571155 2291 reconciler.go:29] "Reconciler: 
start to sync state" Nov 23 23:18:01.571702 kubelet[2291]: E1123 23:18:01.571176 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Nov 23 23:18:01.571814 kubelet[2291]: E1123 23:18:01.571764 2291 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:18:01.572034 kubelet[2291]: I1123 23:18:01.571991 2291 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:18:01.572539 kubelet[2291]: E1123 23:18:01.572434 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:18:01.573196 kubelet[2291]: I1123 23:18:01.573142 2291 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:18:01.573196 kubelet[2291]: I1123 23:18:01.573155 2291 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:18:01.575251 kubelet[2291]: I1123 23:18:01.575116 2291 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 23 23:18:01.584245 kubelet[2291]: I1123 23:18:01.584218 2291 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:18:01.584245 kubelet[2291]: I1123 23:18:01.584240 2291 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:18:01.584362 kubelet[2291]: I1123 23:18:01.584259 2291 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:18:01.586100 kubelet[2291]: I1123 23:18:01.586073 2291 policy_none.go:49] "None policy: Start" Nov 23 23:18:01.586185 kubelet[2291]: I1123 23:18:01.586110 2291 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 23:18:01.586185 kubelet[2291]: I1123 23:18:01.586122 2291 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 23:18:01.588438 kubelet[2291]: I1123 23:18:01.587897 2291 policy_none.go:47] "Start" Nov 23 23:18:01.591341 kubelet[2291]: I1123 23:18:01.591305 2291 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 23 23:18:01.591341 kubelet[2291]: I1123 23:18:01.591340 2291 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 23:18:01.591412 kubelet[2291]: I1123 23:18:01.591377 2291 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 23:18:01.591447 kubelet[2291]: E1123 23:18:01.591415 2291 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:18:01.592532 kubelet[2291]: E1123 23:18:01.592493 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:18:01.595275 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:18:01.608939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:18:01.612250 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:18:01.628159 kubelet[2291]: E1123 23:18:01.628116 2291 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:18:01.628402 kubelet[2291]: I1123 23:18:01.628368 2291 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:18:01.628622 kubelet[2291]: I1123 23:18:01.628385 2291 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:18:01.628622 kubelet[2291]: I1123 23:18:01.628594 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:18:01.629541 kubelet[2291]: E1123 23:18:01.629511 2291 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:18:01.629541 kubelet[2291]: E1123 23:18:01.629551 2291 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 23:18:01.701944 systemd[1]: Created slice kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice - libcontainer container kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice. Nov 23 23:18:01.716860 kubelet[2291]: E1123 23:18:01.716800 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:01.719362 systemd[1]: Created slice kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice - libcontainer container kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice. Nov 23 23:18:01.722443 kubelet[2291]: E1123 23:18:01.721429 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:01.724245 systemd[1]: Created slice kubepods-burstable-pod7fbb5e1fa99efe4c518ab1448664ae9a.slice - libcontainer container kubepods-burstable-pod7fbb5e1fa99efe4c518ab1448664ae9a.slice. 
Nov 23 23:18:01.726084 kubelet[2291]: E1123 23:18:01.726059 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:01.730107 kubelet[2291]: I1123 23:18:01.730083 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:18:01.730557 kubelet[2291]: E1123 23:18:01.730515 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Nov 23 23:18:01.772114 kubelet[2291]: E1123 23:18:01.772061 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Nov 23 23:18:01.773131 kubelet[2291]: I1123 23:18:01.773091 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:01.773131 kubelet[2291]: I1123 23:18:01.773125 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:01.773210 kubelet[2291]: I1123 23:18:01.773145 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:01.773210 kubelet[2291]: I1123 23:18:01.773159 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:01.773210 kubelet[2291]: I1123 23:18:01.773183 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:01.773210 kubelet[2291]: I1123 23:18:01.773199 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:01.773285 kubelet[2291]: I1123 23:18:01.773217 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:01.773285 kubelet[2291]: I1123 23:18:01.773247 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:01.773285 kubelet[2291]: I1123 23:18:01.773277 2291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:01.931713 kubelet[2291]: I1123 23:18:01.931677 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:18:01.932048 kubelet[2291]: E1123 23:18:01.931988 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Nov 23 23:18:02.026651 containerd[1506]: time="2025-11-23T23:18:02.026537696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:02.028412 containerd[1506]: time="2025-11-23T23:18:02.028381398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:02.030628 containerd[1506]: time="2025-11-23T23:18:02.030500757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7fbb5e1fa99efe4c518ab1448664ae9a,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:02.172589 kubelet[2291]: E1123 23:18:02.172521 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Nov 23 23:18:02.334096 kubelet[2291]: I1123 23:18:02.333865 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:18:02.334753 kubelet[2291]: E1123 23:18:02.334696 2291 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Nov 23 23:18:02.463223 kubelet[2291]: E1123 23:18:02.461564 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:18:02.467160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264034449.mount: Deactivated successfully. 
Nov 23 23:18:02.475122 containerd[1506]: time="2025-11-23T23:18:02.475077906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:18:02.475818 containerd[1506]: time="2025-11-23T23:18:02.475789952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 23:18:02.477370 containerd[1506]: time="2025-11-23T23:18:02.477264577Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:18:02.478850 containerd[1506]: time="2025-11-23T23:18:02.478777885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:18:02.479326 containerd[1506]: time="2025-11-23T23:18:02.479299606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 23:18:02.480910 containerd[1506]: time="2025-11-23T23:18:02.480838701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:18:02.484188 containerd[1506]: time="2025-11-23T23:18:02.481744194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:18:02.484188 containerd[1506]: time="2025-11-23T23:18:02.482475781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 23 23:18:02.484188 containerd[1506]: time="2025-11-23T23:18:02.482522672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 454.092821ms" Nov 23 23:18:02.485760 containerd[1506]: time="2025-11-23T23:18:02.485416704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 454.240139ms" Nov 23 23:18:02.491253 containerd[1506]: time="2025-11-23T23:18:02.491198962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.894229ms" Nov 23 23:18:02.506723 containerd[1506]: time="2025-11-23T23:18:02.506659907Z" level=info msg="connecting to shim ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70" address="unix:///run/containerd/s/9a11faf676a4241eb8125dd92fddc962fe54ab0df61cf3e6621a0432fe724909" namespace=k8s.io protocol=ttrpc version=3 Nov 
23 23:18:02.520135 containerd[1506]: time="2025-11-23T23:18:02.519534872Z" level=info msg="connecting to shim 0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7" address="unix:///run/containerd/s/f6ded2da912987dc767eafda896ed148022d1909d2d35bcb026b51e289f01732" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:02.525812 containerd[1506]: time="2025-11-23T23:18:02.525773741Z" level=info msg="connecting to shim 9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea" address="unix:///run/containerd/s/0e6bfd0a31b58f7c6607bdecb4ea18e9f0226de676b69354aba172bba986a60f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:02.539196 systemd[1]: Started cri-containerd-ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70.scope - libcontainer container ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70. Nov 23 23:18:02.542218 systemd[1]: Started cri-containerd-0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7.scope - libcontainer container 0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7. Nov 23 23:18:02.562186 systemd[1]: Started cri-containerd-9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea.scope - libcontainer container 9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea. Nov 23 23:18:02.591500 kubelet[2291]: E1123 23:18:02.591392 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 23:18:02.606440 containerd[1506]: time="2025-11-23T23:18:02.606380460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70\"" Nov 23 23:18:02.607204 containerd[1506]: time="2025-11-23T23:18:02.607172872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7\"" Nov 23 23:18:02.608607 containerd[1506]: time="2025-11-23T23:18:02.608566170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7fbb5e1fa99efe4c518ab1448664ae9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea\"" Nov 23 23:18:02.612814 containerd[1506]: time="2025-11-23T23:18:02.612694330Z" level=info msg="CreateContainer within sandbox \"0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:18:02.613482 containerd[1506]: time="2025-11-23T23:18:02.613459673Z" level=info msg="CreateContainer within sandbox \"ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:18:02.615128 containerd[1506]: time="2025-11-23T23:18:02.615077172Z" level=info msg="CreateContainer within sandbox \"9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:18:02.622544 containerd[1506]: time="2025-11-23T23:18:02.622499073Z" 
level=info msg="Container 37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:02.624556 containerd[1506]: time="2025-11-23T23:18:02.624509835Z" level=info msg="Container 7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:02.626342 containerd[1506]: time="2025-11-23T23:18:02.626315537Z" level=info msg="Container d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:02.631431 containerd[1506]: time="2025-11-23T23:18:02.631391876Z" level=info msg="CreateContainer within sandbox \"0b3d2d625af5bd6af80ef0454fc38ced00076708b4b44fafe5bd3b0252bb5ea7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed\"" Nov 23 23:18:02.632112 containerd[1506]: time="2025-11-23T23:18:02.632083539Z" level=info msg="StartContainer for \"37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed\"" Nov 23 23:18:02.633500 containerd[1506]: time="2025-11-23T23:18:02.633470631Z" level=info msg="connecting to shim 37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed" address="unix:///run/containerd/s/f6ded2da912987dc767eafda896ed148022d1909d2d35bcb026b51e289f01732" protocol=ttrpc version=3 Nov 23 23:18:02.634506 containerd[1506]: time="2025-11-23T23:18:02.634413004Z" level=info msg="CreateContainer within sandbox \"ea50849077e72d3261af5da6abbd1c11d29b166128632797fbea25c818e40a70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b\"" Nov 23 23:18:02.634908 containerd[1506]: time="2025-11-23T23:18:02.634860606Z" level=info msg="StartContainer for \"7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b\"" Nov 23 23:18:02.636023 containerd[1506]: time="2025-11-23T23:18:02.635975244Z" level=info msg="CreateContainer within sandbox \"9e7f000184eba5cd539dbc1bfd0e3c6190bfa24e96d0ce8afc737676eb8ea0ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda\"" Nov 23 23:18:02.636321 containerd[1506]: time="2025-11-23T23:18:02.636100779Z" level=info msg="connecting to shim 7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b" address="unix:///run/containerd/s/9a11faf676a4241eb8125dd92fddc962fe54ab0df61cf3e6621a0432fe724909" protocol=ttrpc version=3 Nov 23 23:18:02.636422 containerd[1506]: time="2025-11-23T23:18:02.636397779Z" level=info msg="StartContainer for \"d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda\"" Nov 23 23:18:02.638368 containerd[1506]: time="2025-11-23T23:18:02.638329456Z" level=info msg="connecting to shim d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda" address="unix:///run/containerd/s/0e6bfd0a31b58f7c6607bdecb4ea18e9f0226de676b69354aba172bba986a60f" protocol=ttrpc version=3 Nov 23 23:18:02.656214 systemd[1]: Started cri-containerd-37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed.scope - libcontainer container 37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed. Nov 23 23:18:02.660874 systemd[1]: Started cri-containerd-7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b.scope - libcontainer container 7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b. 
Nov 23 23:18:02.662382 systemd[1]: Started cri-containerd-d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda.scope - libcontainer container d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda. Nov 23 23:18:02.686801 kubelet[2291]: E1123 23:18:02.686754 2291 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:18:02.710190 containerd[1506]: time="2025-11-23T23:18:02.709479646Z" level=info msg="StartContainer for \"37bc9b12d1ec21d8cfac86996b6885d72756393715f342fa573ac2ccf9c7c3ed\" returns successfully" Nov 23 23:18:02.719180 containerd[1506]: time="2025-11-23T23:18:02.719058066Z" level=info msg="StartContainer for \"d545cdb640b6802923be4ed54903f7c784119f1ceceaccc521d8e9bccd582dda\" returns successfully" Nov 23 23:18:02.724176 containerd[1506]: time="2025-11-23T23:18:02.724078745Z" level=info msg="StartContainer for \"7505e17693ab8a7d62c97948f851c3dc2f80c86df45fc50a98f123ede937491b\" returns successfully" Nov 23 23:18:03.136114 kubelet[2291]: I1123 23:18:03.136064 2291 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:18:03.600659 kubelet[2291]: E1123 23:18:03.600560 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:03.602140 kubelet[2291]: E1123 23:18:03.602115 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:03.608163 kubelet[2291]: E1123 23:18:03.608132 2291 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:18:04.342222 kubelet[2291]: E1123 23:18:04.342180 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 23:18:04.540543 kubelet[2291]: I1123 23:18:04.540393 2291 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:18:04.540543 kubelet[2291]: E1123 23:18:04.540436 2291 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 23 23:18:04.562893 kubelet[2291]: I1123 23:18:04.562554 2291 apiserver.go:52] "Watching apiserver" Nov 23 23:18:04.571662 kubelet[2291]: I1123 23:18:04.571584 2291 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 23:18:04.571662 kubelet[2291]: I1123 23:18:04.571623 2291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:04.580043 kubelet[2291]: E1123 23:18:04.578670 2291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:04.580043 kubelet[2291]: I1123 23:18:04.578702 2291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:04.582103 kubelet[2291]: E1123 23:18:04.582063 2291 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:04.582444 kubelet[2291]: I1123 23:18:04.582213 2291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:04.584868 kubelet[2291]: E1123 23:18:04.584838 2291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:04.606922 kubelet[2291]: I1123 23:18:04.606556 2291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:04.606922 kubelet[2291]: I1123 23:18:04.606677 2291 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:04.610039 kubelet[2291]: E1123 23:18:04.609356 2291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:04.610623 kubelet[2291]: E1123 23:18:04.610594 2291 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:06.874244 systemd[1]: Reload requested from client PID 2582 ('systemctl') (unit session-7.scope)... Nov 23 23:18:06.874535 systemd[1]: Reloading... Nov 23 23:18:06.960034 zram_generator::config[2628]: No configuration found. Nov 23 23:18:07.165794 systemd[1]: Reloading finished in 290 ms. Nov 23 23:18:07.192251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:18:07.206467 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:18:07.206859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:18:07.207104 systemd[1]: kubelet.service: Consumed 1.921s CPU time, 121.9M memory peak. Nov 23 23:18:07.209612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:18:07.364861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:18:07.369181 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:18:07.417475 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:18:07.417475 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 23:18:07.417475 kubelet[2667]: I1123 23:18:07.417441 2667 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:18:07.424221 kubelet[2667]: I1123 23:18:07.424187 2667 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 23 23:18:07.424221 kubelet[2667]: I1123 23:18:07.424213 2667 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:18:07.424346 kubelet[2667]: I1123 23:18:07.424239 2667 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 23 23:18:07.424346 kubelet[2667]: I1123 23:18:07.424246 2667 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:18:07.424438 kubelet[2667]: I1123 23:18:07.424420 2667 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:18:07.425603 kubelet[2667]: I1123 23:18:07.425577 2667 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 23:18:07.427755 kubelet[2667]: I1123 23:18:07.427719 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:18:07.430824 kubelet[2667]: I1123 23:18:07.430799 2667 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:18:07.434996 kubelet[2667]: I1123 23:18:07.434895 2667 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 23 23:18:07.435442 kubelet[2667]: I1123 23:18:07.435414 2667 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:18:07.435636 kubelet[2667]: I1123 23:18:07.435495 2667 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:18:07.435763 kubelet[2667]: I1123 23:18:07.435749 2667 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 23 23:18:07.435812 kubelet[2667]: I1123 23:18:07.435805 2667 container_manager_linux.go:306] "Creating device plugin manager" Nov 23 23:18:07.435881 kubelet[2667]: I1123 23:18:07.435872 2667 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 23 23:18:07.436812 kubelet[2667]: I1123 23:18:07.436789 2667 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:18:07.437063 kubelet[2667]: I1123 23:18:07.437041 2667 kubelet.go:475] "Attempting to sync node with API server" Nov 23 23:18:07.437148 kubelet[2667]: I1123 23:18:07.437137 2667 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:18:07.437216 kubelet[2667]: I1123 23:18:07.437207 2667 kubelet.go:387] "Adding apiserver pod source" Nov 23 23:18:07.437273 kubelet[2667]: I1123 23:18:07.437265 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:18:07.438306 kubelet[2667]: I1123 23:18:07.438275 2667 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:18:07.439145 kubelet[2667]: I1123 23:18:07.438818 2667 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:18:07.439145 kubelet[2667]: I1123 23:18:07.438849 2667 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 23 23:18:07.442439 kubelet[2667]: I1123 23:18:07.442413 2667 server.go:1262] "Started kubelet" Nov 23 23:18:07.443614 kubelet[2667]: I1123 23:18:07.442663 2667 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:18:07.443614 kubelet[2667]: I1123 23:18:07.442728 2667 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 23 23:18:07.443614 kubelet[2667]: I1123 23:18:07.442940 2667 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:18:07.443614 kubelet[2667]: I1123 23:18:07.442993 2667 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:18:07.443614 kubelet[2667]: I1123 23:18:07.443043 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:18:07.444128 kubelet[2667]: I1123 23:18:07.443762 2667 server.go:310] "Adding debug handlers to kubelet server" Nov 23 23:18:07.444456 kubelet[2667]: E1123 23:18:07.444212 2667 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:18:07.449013 kubelet[2667]: I1123 23:18:07.444271 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:18:07.449013 kubelet[2667]: I1123 23:18:07.444412 2667 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 23 23:18:07.449013 kubelet[2667]: I1123 23:18:07.444420 2667 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 23:18:07.449013 kubelet[2667]: I1123 23:18:07.445200 2667 reconciler.go:29] "Reconciler: start to sync state" Nov 23 23:18:07.450283 kubelet[2667]: I1123 23:18:07.450134 2667 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:18:07.450347 kubelet[2667]: I1123 23:18:07.450289 2667 factory.go:221] Registration of the crio container factory failed: 
Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:18:07.464077 kubelet[2667]: E1123 23:18:07.464036 2667 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:18:07.465098 kubelet[2667]: I1123 23:18:07.464079 2667 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 23 23:18:07.469200 kubelet[2667]: I1123 23:18:07.469166 2667 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 23 23:18:07.469200 kubelet[2667]: I1123 23:18:07.469191 2667 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 23 23:18:07.469327 kubelet[2667]: I1123 23:18:07.469211 2667 kubelet.go:2427] "Starting kubelet main sync loop" Nov 23 23:18:07.469327 kubelet[2667]: E1123 23:18:07.469248 2667 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:18:07.469765 kubelet[2667]: I1123 23:18:07.469743 2667 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:18:07.502210 kubelet[2667]: I1123 23:18:07.502130 2667 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:18:07.502210 kubelet[2667]: I1123 23:18:07.502151 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:18:07.502210 kubelet[2667]: I1123 23:18:07.502172 2667 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502310 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502320 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502339 2667 policy_none.go:49] "None policy: Start" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502348 2667 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502358 2667 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502449 2667 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 23 23:18:07.502593 kubelet[2667]: I1123 23:18:07.502456 2667 policy_none.go:47] "Start" Nov 23 23:18:07.507270 kubelet[2667]: E1123 23:18:07.507241 2667 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:18:07.507426 kubelet[2667]: I1123 23:18:07.507409 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:18:07.507469 kubelet[2667]: I1123 23:18:07.507437 2667 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:18:07.508072 kubelet[2667]: I1123 23:18:07.507983 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:18:07.508846 kubelet[2667]: E1123 23:18:07.508825 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:18:07.570831 kubelet[2667]: I1123 23:18:07.570759 2667 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:07.571998 kubelet[2667]: I1123 23:18:07.571974 2667 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:07.572204 kubelet[2667]: I1123 23:18:07.572187 2667 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:07.609801 kubelet[2667]: I1123 23:18:07.609772 2667 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:18:07.617244 kubelet[2667]: I1123 23:18:07.617205 2667 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 23:18:07.617357 kubelet[2667]: I1123 23:18:07.617284 2667 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:18:07.747133 kubelet[2667]: I1123 23:18:07.746990 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:07.747133 kubelet[2667]: I1123 23:18:07.747048 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:07.747133 kubelet[2667]: I1123 23:18:07.747077 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:07.747279 kubelet[2667]: I1123 23:18:07.747138 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:07.747279 kubelet[2667]: I1123 23:18:07.747168 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:07.747279 kubelet[2667]: I1123 23:18:07.747205 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:07.747279 kubelet[2667]: I1123 23:18:07.747221 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:18:07.747279 kubelet[2667]: I1123 23:18:07.747234 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fbb5e1fa99efe4c518ab1448664ae9a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7fbb5e1fa99efe4c518ab1448664ae9a\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:18:07.747389 kubelet[2667]: I1123 23:18:07.747249 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:18:08.438494 kubelet[2667]: I1123 23:18:08.438443 2667 apiserver.go:52] "Watching apiserver" Nov 23 23:18:08.446048 kubelet[2667]: I1123 23:18:08.445989 2667 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 23:18:08.517544 kubelet[2667]: I1123 23:18:08.517269 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5172507880000001 podStartE2EDuration="1.517250788s" podCreationTimestamp="2025-11-23 23:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:08.508795453 +0000 UTC m=+1.136017276" watchObservedRunningTime="2025-11-23 23:18:08.517250788 +0000 UTC m=+1.144472531" Nov 23 23:18:08.527054 kubelet[2667]: I1123 23:18:08.526928 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.526911363 podStartE2EDuration="1.526911363s" podCreationTimestamp="2025-11-23 23:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:08.526807527 +0000 UTC m=+1.154029310" watchObservedRunningTime="2025-11-23 23:18:08.526911363 +0000 UTC m=+1.154133146" Nov 23 23:18:08.527946 kubelet[2667]: I1123 23:18:08.527075 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.527067798 podStartE2EDuration="1.527067798s" podCreationTimestamp="2025-11-23 23:18:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:08.517771408 +0000 UTC m=+1.144993191" watchObservedRunningTime="2025-11-23 23:18:08.527067798 +0000 UTC m=+1.154289581" Nov 23 23:18:11.928251 kubelet[2667]: I1123 23:18:11.928085 2667 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:18:11.928706 kubelet[2667]: I1123 23:18:11.928592 2667 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:18:11.928746 containerd[1506]: time="2025-11-23T23:18:11.928386644Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 23 23:18:12.994615 systemd[1]: Created slice kubepods-besteffort-pod28051a1b_49e0_4df5_9560_f20ac021d704.slice - libcontainer container kubepods-besteffort-pod28051a1b_49e0_4df5_9560_f20ac021d704.slice. Nov 23 23:18:13.083130 kubelet[2667]: I1123 23:18:13.082986 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28051a1b-49e0-4df5-9560-f20ac021d704-kube-proxy\") pod \"kube-proxy-8mf7k\" (UID: \"28051a1b-49e0-4df5-9560-f20ac021d704\") " pod="kube-system/kube-proxy-8mf7k" Nov 23 23:18:13.083130 kubelet[2667]: I1123 23:18:13.083047 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28051a1b-49e0-4df5-9560-f20ac021d704-xtables-lock\") pod \"kube-proxy-8mf7k\" (UID: \"28051a1b-49e0-4df5-9560-f20ac021d704\") " pod="kube-system/kube-proxy-8mf7k" Nov 23 23:18:13.083130 kubelet[2667]: I1123 23:18:13.083065 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28051a1b-49e0-4df5-9560-f20ac021d704-lib-modules\") pod \"kube-proxy-8mf7k\" (UID: \"28051a1b-49e0-4df5-9560-f20ac021d704\") " pod="kube-system/kube-proxy-8mf7k" Nov 23 23:18:13.083130 kubelet[2667]: I1123 23:18:13.083083 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwk4l\" (UniqueName: \"kubernetes.io/projected/28051a1b-49e0-4df5-9560-f20ac021d704-kube-api-access-qwk4l\") pod \"kube-proxy-8mf7k\" (UID: \"28051a1b-49e0-4df5-9560-f20ac021d704\") " pod="kube-system/kube-proxy-8mf7k" Nov 23 23:18:13.113127 systemd[1]: Created slice kubepods-besteffort-podf475522e_1c44_484a_a262_c6bcf5bde3ef.slice - libcontainer container kubepods-besteffort-podf475522e_1c44_484a_a262_c6bcf5bde3ef.slice. 
Nov 23 23:18:13.183915 kubelet[2667]: I1123 23:18:13.183879 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpfv\" (UniqueName: \"kubernetes.io/projected/f475522e-1c44-484a-a262-c6bcf5bde3ef-kube-api-access-jfpfv\") pod \"tigera-operator-65cdcdfd6d-522ts\" (UID: \"f475522e-1c44-484a-a262-c6bcf5bde3ef\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-522ts" Nov 23 23:18:13.184040 kubelet[2667]: I1123 23:18:13.183947 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f475522e-1c44-484a-a262-c6bcf5bde3ef-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-522ts\" (UID: \"f475522e-1c44-484a-a262-c6bcf5bde3ef\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-522ts" Nov 23 23:18:13.309331 containerd[1506]: time="2025-11-23T23:18:13.309199646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mf7k,Uid:28051a1b-49e0-4df5-9560-f20ac021d704,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:13.329198 containerd[1506]: time="2025-11-23T23:18:13.329156283Z" level=info msg="connecting to shim 4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7" address="unix:///run/containerd/s/930afa743c082ab3ad4851b6962a57051ee5c71c7f89c8e024a3d5e9391dc5b4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:13.354195 systemd[1]: Started cri-containerd-4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7.scope - libcontainer container 4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7. Nov 23 23:18:13.377284 containerd[1506]: time="2025-11-23T23:18:13.377230995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mf7k,Uid:28051a1b-49e0-4df5-9560-f20ac021d704,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7\"" Nov 23 23:18:13.384961 containerd[1506]: time="2025-11-23T23:18:13.384922744Z" level=info msg="CreateContainer within sandbox \"4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:18:13.395115 containerd[1506]: time="2025-11-23T23:18:13.394246156Z" level=info msg="Container ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:13.401451 containerd[1506]: time="2025-11-23T23:18:13.401389615Z" level=info msg="CreateContainer within sandbox \"4bc31ec8e16bc00a6b1a2ee73346d54f234368adace00b6b388e1a914f801ae7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5\"" Nov 23 23:18:13.401989 containerd[1506]: time="2025-11-23T23:18:13.401934143Z" level=info msg="StartContainer for \"ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5\"" Nov 23 23:18:13.403972 containerd[1506]: time="2025-11-23T23:18:13.403880253Z" level=info msg="connecting to shim ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5" address="unix:///run/containerd/s/930afa743c082ab3ad4851b6962a57051ee5c71c7f89c8e024a3d5e9391dc5b4" protocol=ttrpc version=3 Nov 23 23:18:13.419192 containerd[1506]: time="2025-11-23T23:18:13.419158655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-522ts,Uid:f475522e-1c44-484a-a262-c6bcf5bde3ef,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:18:13.433173 systemd[1]: Started 
cri-containerd-ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5.scope - libcontainer container ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5. Nov 23 23:18:13.439428 containerd[1506]: time="2025-11-23T23:18:13.439377992Z" level=info msg="connecting to shim d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13" address="unix:///run/containerd/s/dc0f38071c89488f388f91794237b8ad929e5f98001f90d293590aef37eb518c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:13.458156 systemd[1]: Started cri-containerd-d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13.scope - libcontainer container d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13. Nov 23 23:18:13.492421 containerd[1506]: time="2025-11-23T23:18:13.492376548Z" level=info msg="StartContainer for \"ab79ea019100a8f7112b0a4d427ea56a758944f1ecedc4bdb6429834cdb32ef5\" returns successfully" Nov 23 23:18:13.497693 containerd[1506]: time="2025-11-23T23:18:13.497637372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-522ts,Uid:f475522e-1c44-484a-a262-c6bcf5bde3ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13\"" Nov 23 23:18:13.502130 containerd[1506]: time="2025-11-23T23:18:13.502036299Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:18:13.521630 kubelet[2667]: I1123 23:18:13.521560 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mf7k" podStartSLOduration=1.521544499 podStartE2EDuration="1.521544499s" podCreationTimestamp="2025-11-23 23:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:13.520922129 +0000 UTC m=+6.148143912" watchObservedRunningTime="2025-11-23 23:18:13.521544499 +0000 UTC m=+6.148766242" Nov 23 23:18:14.199550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347746350.mount: Deactivated successfully. Nov 23 23:18:14.604331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267996323.mount: Deactivated successfully. 
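The `\x2d` sequences in the tmpmount unit names above are systemd's escaping of a literal '-' inside a path component. A tiny illustrative decoder is sketched below (in practice `systemd-escape --unescape` does this):

```python
# Illustrative: decode systemd \xNN escapes such as the "\x2d" in the
# var-lib-containerd tmpmount unit names above.
import re

def systemd_unescape(name: str) -> str:
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)

print(systemd_unescape(r"containerd\x2dmount3347746350"))  # containerd-mount3347746350
```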
Nov 23 23:18:14.946019 containerd[1506]: time="2025-11-23T23:18:14.945966005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:14.946959 containerd[1506]: time="2025-11-23T23:18:14.946732625Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:18:14.947779 containerd[1506]: time="2025-11-23T23:18:14.947752931Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:14.957575 containerd[1506]: time="2025-11-23T23:18:14.957532021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:14.958384 containerd[1506]: time="2025-11-23T23:18:14.958356030Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.456278991s" Nov 23 23:18:14.958483 containerd[1506]: time="2025-11-23T23:18:14.958468446Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:18:14.963599 containerd[1506]: time="2025-11-23T23:18:14.963566174Z" level=info msg="CreateContainer within sandbox \"d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:18:14.971939 containerd[1506]: time="2025-11-23T23:18:14.971904590Z" level=info msg="Container 898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:14.978592 containerd[1506]: time="2025-11-23T23:18:14.978558170Z" level=info msg="CreateContainer within sandbox \"d110d4ae6a7b2c6c48b023e528019091b8c2a6763957bc0eef4907edc97bbd13\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a\"" Nov 23 23:18:14.979119 containerd[1506]: time="2025-11-23T23:18:14.979093915Z" level=info msg="StartContainer for \"898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a\"" Nov 23 23:18:14.979973 containerd[1506]: time="2025-11-23T23:18:14.979951101Z" level=info msg="connecting to shim 898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a" address="unix:///run/containerd/s/dc0f38071c89488f388f91794237b8ad929e5f98001f90d293590aef37eb518c" protocol=ttrpc version=3 Nov 23 23:18:15.010198 systemd[1]: Started cri-containerd-898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a.scope - libcontainer container 898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a. 
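From the pull record above, the quay.io/tigera/operator:v1.38.7 image (reported size 22,147,999 bytes) arrived in about 1.456s. A quick back-of-the-envelope throughput check using only the logged numbers:

```python
# Throughput implied by the pull record above; both values are copied from the log.
size_bytes = 22_147_999          # repo digest size "22147999"
duration_s = 1.456278991         # "in 1.456278991s"

rate = size_bytes / duration_s
print(f"{rate/1e6:.1f} MB/s ({rate/2**20:.1f} MiB/s)")  # ~15.2 MB/s (~14.5 MiB/s)
```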
Nov 23 23:18:15.039672 containerd[1506]: time="2025-11-23T23:18:15.039629782Z" level=info msg="StartContainer for \"898525a3a695ad75b9c663afdcb0d6fc8325ffb85241925ed5d8205436afc99a\" returns successfully" Nov 23 23:18:19.304511 kubelet[2667]: I1123 23:18:19.304294 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-522ts" podStartSLOduration=4.845485086 podStartE2EDuration="6.30427817s" podCreationTimestamp="2025-11-23 23:18:13 +0000 UTC" firstStartedPulling="2025-11-23 23:18:13.500655768 +0000 UTC m=+6.127877551" lastFinishedPulling="2025-11-23 23:18:14.959448852 +0000 UTC m=+7.586670635" observedRunningTime="2025-11-23 23:18:15.517507186 +0000 UTC m=+8.144728929" watchObservedRunningTime="2025-11-23 23:18:19.30427817 +0000 UTC m=+11.931499953" Nov 23 23:18:20.374306 sudo[1721]: pam_unix(sudo:session): session closed for user root Nov 23 23:18:20.375573 sshd[1720]: Connection closed by 10.0.0.1 port 37308 Nov 23 23:18:20.376153 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:20.381690 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:37308.service: Deactivated successfully. Nov 23 23:18:20.389634 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:18:20.393109 systemd[1]: session-7.scope: Consumed 7.483s CPU time, 223.9M memory peak. Nov 23 23:18:20.396510 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:18:20.402621 systemd-logind[1486]: Removed session 7. Nov 23 23:18:20.459118 update_engine[1488]: I20251123 23:18:20.459030 1488 update_attempter.cc:509] Updating boot flags... Nov 23 23:18:27.746426 systemd[1]: Created slice kubepods-besteffort-podc0ba77af_ffa9_44d9_9a42_db60fc95bbca.slice - libcontainer container kubepods-besteffort-podc0ba77af_ffa9_44d9_9a42_db60fc95bbca.slice. Nov 23 23:18:27.781262 kubelet[2667]: I1123 23:18:27.781206 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c0ba77af-ffa9-44d9-9a42-db60fc95bbca-typha-certs\") pod \"calico-typha-6bcc7fd586-xlbgs\" (UID: \"c0ba77af-ffa9-44d9-9a42-db60fc95bbca\") " pod="calico-system/calico-typha-6bcc7fd586-xlbgs" Nov 23 23:18:27.781262 kubelet[2667]: I1123 23:18:27.781257 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdk4\" (UniqueName: \"kubernetes.io/projected/c0ba77af-ffa9-44d9-9a42-db60fc95bbca-kube-api-access-hqdk4\") pod \"calico-typha-6bcc7fd586-xlbgs\" (UID: \"c0ba77af-ffa9-44d9-9a42-db60fc95bbca\") " pod="calico-system/calico-typha-6bcc7fd586-xlbgs" Nov 23 23:18:27.781625 kubelet[2667]: I1123 23:18:27.781277 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0ba77af-ffa9-44d9-9a42-db60fc95bbca-tigera-ca-bundle\") pod \"calico-typha-6bcc7fd586-xlbgs\" (UID: \"c0ba77af-ffa9-44d9-9a42-db60fc95bbca\") " pod="calico-system/calico-typha-6bcc7fd586-xlbgs" Nov 23 23:18:27.949472 systemd[1]: Created slice kubepods-besteffort-pod54c26791_4037_4582_9658_1ac0b12d01c4.slice - libcontainer container kubepods-besteffort-pod54c26791_4037_4582_9658_1ac0b12d01c4.slice. 
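In the startup record above for tigera-operator-65cdcdfd6d-522ts, the gap between podStartE2EDuration and podStartSLOduration matches the firstStartedPulling-to-lastFinishedPulling window exactly, which is consistent with the SLO figure excluding image-pull time. A quick check using only the logged values:

```python
# All four values are copied from the pod_startup_latency_tracker entry above.
pod_start_slo = 4.845485086   # podStartSLOduration (seconds)
pod_start_e2e = 6.30427817    # podStartE2EDuration (seconds)
pull_start    = 6.127877551   # firstStartedPulling, monotonic m=+ offset
pull_end      = 7.586670635   # lastFinishedPulling, monotonic m=+ offset

print(round(pod_start_e2e - pod_start_slo, 9))  # 1.458793084
print(round(pull_end - pull_start, 9))          # 1.458793084
```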
Nov 23 23:18:27.982121 kubelet[2667]: I1123 23:18:27.982062 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-lib-modules\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982121 kubelet[2667]: I1123 23:18:27.982104 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-policysync\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982290 kubelet[2667]: I1123 23:18:27.982181 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-cni-net-dir\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982290 kubelet[2667]: I1123 23:18:27.982206 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-flexvol-driver-host\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982290 kubelet[2667]: I1123 23:18:27.982223 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zr7\" (UniqueName: \"kubernetes.io/projected/54c26791-4037-4582-9658-1ac0b12d01c4-kube-api-access-82zr7\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982290 kubelet[2667]: I1123 23:18:27.982250 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54c26791-4037-4582-9658-1ac0b12d01c4-tigera-ca-bundle\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982290 kubelet[2667]: I1123 23:18:27.982265 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-var-lib-calico\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982399 kubelet[2667]: I1123 23:18:27.982283 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-cni-log-dir\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982399 kubelet[2667]: I1123 23:18:27.982297 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54c26791-4037-4582-9658-1ac0b12d01c4-node-certs\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982399 kubelet[2667]: I1123 23:18:27.982322 2667 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-xtables-lock\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982399 kubelet[2667]: I1123 23:18:27.982342 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-cni-bin-dir\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:27.982480 kubelet[2667]: I1123 23:18:27.982382 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54c26791-4037-4582-9658-1ac0b12d01c4-var-run-calico\") pod \"calico-node-chz59\" (UID: \"54c26791-4037-4582-9658-1ac0b12d01c4\") " pod="calico-system/calico-node-chz59" Nov 23 23:18:28.056499 containerd[1506]: time="2025-11-23T23:18:28.055838872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bcc7fd586-xlbgs,Uid:c0ba77af-ffa9-44d9-9a42-db60fc95bbca,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:28.087796 kubelet[2667]: E1123 23:18:28.087761 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.087796 kubelet[2667]: W1123 23:18:28.087787 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.095192 kubelet[2667]: E1123 23:18:28.095126 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.097027 kubelet[2667]: E1123 23:18:28.096735 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.097027 kubelet[2667]: W1123 23:18:28.096758 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.097027 kubelet[2667]: E1123 23:18:28.096787 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.097891 kubelet[2667]: E1123 23:18:28.097838 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.097891 kubelet[2667]: W1123 23:18:28.097861 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.097891 kubelet[2667]: E1123 23:18:28.097884 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.098242 kubelet[2667]: E1123 23:18:28.098090 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.098242 kubelet[2667]: W1123 23:18:28.098104 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.098242 kubelet[2667]: E1123 23:18:28.098114 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.098343 kubelet[2667]: E1123 23:18:28.098331 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.098343 kubelet[2667]: W1123 23:18:28.098340 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.098385 kubelet[2667]: E1123 23:18:28.098355 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.103480 kubelet[2667]: E1123 23:18:28.103446 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.103480 kubelet[2667]: W1123 23:18:28.103473 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.103854 kubelet[2667]: E1123 23:18:28.103495 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.108605 kubelet[2667]: E1123 23:18:28.107790 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.108605 kubelet[2667]: W1123 23:18:28.107814 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.108605 kubelet[2667]: E1123 23:18:28.107835 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.112234 kubelet[2667]: E1123 23:18:28.112149 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.112234 kubelet[2667]: W1123 23:18:28.112176 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.112234 kubelet[2667]: E1123 23:18:28.112198 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.112718 kubelet[2667]: E1123 23:18:28.112679 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.112718 kubelet[2667]: W1123 23:18:28.112693 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.112718 kubelet[2667]: E1123 23:18:28.112705 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.113128 kubelet[2667]: E1123 23:18:28.113059 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.113128 kubelet[2667]: W1123 23:18:28.113072 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.113128 kubelet[2667]: E1123 23:18:28.113082 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.113495 kubelet[2667]: E1123 23:18:28.113478 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.113576 kubelet[2667]: W1123 23:18:28.113564 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.113633 kubelet[2667]: E1123 23:18:28.113622 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.113862 kubelet[2667]: E1123 23:18:28.113850 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.114650 kubelet[2667]: W1123 23:18:28.113959 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.114650 kubelet[2667]: E1123 23:18:28.113978 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.115787 kubelet[2667]: E1123 23:18:28.115689 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.115787 kubelet[2667]: W1123 23:18:28.115709 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.115787 kubelet[2667]: E1123 23:18:28.115723 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.116421 kubelet[2667]: E1123 23:18:28.116370 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.116811 kubelet[2667]: W1123 23:18:28.116791 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.116891 kubelet[2667]: E1123 23:18:28.116866 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.117728 kubelet[2667]: E1123 23:18:28.117709 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.117941 kubelet[2667]: W1123 23:18:28.117806 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.117941 kubelet[2667]: E1123 23:18:28.117824 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.118776 kubelet[2667]: E1123 23:18:28.118760 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.118887 kubelet[2667]: W1123 23:18:28.118859 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.118941 kubelet[2667]: E1123 23:18:28.118931 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.129326 kubelet[2667]: E1123 23:18:28.129280 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:28.141174 containerd[1506]: time="2025-11-23T23:18:28.141121813Z" level=info msg="connecting to shim bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7" address="unix:///run/containerd/s/36860618b7f328ae97c759909bbb03e27d953fdea62b721057726e0150fb4240" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:28.175664 kubelet[2667]: E1123 23:18:28.175333 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.175664 kubelet[2667]: W1123 23:18:28.175372 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.175664 kubelet[2667]: E1123 23:18:28.175396 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.175664 kubelet[2667]: E1123 23:18:28.175565 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.175664 kubelet[2667]: W1123 23:18:28.175573 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.175664 kubelet[2667]: E1123 23:18:28.175614 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.176171 kubelet[2667]: E1123 23:18:28.175754 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176171 kubelet[2667]: W1123 23:18:28.175763 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176171 kubelet[2667]: E1123 23:18:28.175787 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.176171 kubelet[2667]: E1123 23:18:28.175956 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176171 kubelet[2667]: W1123 23:18:28.175964 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176171 kubelet[2667]: E1123 23:18:28.175974 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.176347 kubelet[2667]: E1123 23:18:28.176268 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176347 kubelet[2667]: W1123 23:18:28.176279 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176347 kubelet[2667]: E1123 23:18:28.176296 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.176533 kubelet[2667]: E1123 23:18:28.176430 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176533 kubelet[2667]: W1123 23:18:28.176439 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176533 kubelet[2667]: E1123 23:18:28.176486 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.176673 kubelet[2667]: E1123 23:18:28.176656 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176673 kubelet[2667]: W1123 23:18:28.176668 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176747 kubelet[2667]: E1123 23:18:28.176676 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.176834 kubelet[2667]: E1123 23:18:28.176819 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.176834 kubelet[2667]: W1123 23:18:28.176830 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.176907 kubelet[2667]: E1123 23:18:28.176838 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.177018 kubelet[2667]: E1123 23:18:28.176994 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.177462 kubelet[2667]: W1123 23:18:28.177056 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.177462 kubelet[2667]: E1123 23:18:28.177110 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.177462 kubelet[2667]: E1123 23:18:28.177307 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.177462 kubelet[2667]: W1123 23:18:28.177316 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.177462 kubelet[2667]: E1123 23:18:28.177325 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.177721 kubelet[2667]: E1123 23:18:28.177495 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.177721 kubelet[2667]: W1123 23:18:28.177505 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.177721 kubelet[2667]: E1123 23:18:28.177541 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.177808 kubelet[2667]: E1123 23:18:28.177735 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.177808 kubelet[2667]: W1123 23:18:28.177744 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.177808 kubelet[2667]: E1123 23:18:28.177753 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.177942 kubelet[2667]: E1123 23:18:28.177911 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178048 kubelet[2667]: W1123 23:18:28.177938 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178048 kubelet[2667]: E1123 23:18:28.177966 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.178232 kubelet[2667]: E1123 23:18:28.178161 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178232 kubelet[2667]: W1123 23:18:28.178174 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178232 kubelet[2667]: E1123 23:18:28.178183 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.178335 kubelet[2667]: E1123 23:18:28.178320 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178335 kubelet[2667]: W1123 23:18:28.178331 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178399 kubelet[2667]: E1123 23:18:28.178340 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.178478 kubelet[2667]: E1123 23:18:28.178459 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178478 kubelet[2667]: W1123 23:18:28.178472 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178528 kubelet[2667]: E1123 23:18:28.178481 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.178679 kubelet[2667]: E1123 23:18:28.178661 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178679 kubelet[2667]: W1123 23:18:28.178674 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178734 kubelet[2667]: E1123 23:18:28.178684 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.178863 kubelet[2667]: E1123 23:18:28.178846 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.178863 kubelet[2667]: W1123 23:18:28.178856 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.178863 kubelet[2667]: E1123 23:18:28.178864 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.179278 kubelet[2667]: E1123 23:18:28.179257 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.179278 kubelet[2667]: W1123 23:18:28.179272 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.179363 kubelet[2667]: E1123 23:18:28.179284 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.179467 kubelet[2667]: E1123 23:18:28.179445 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.179467 kubelet[2667]: W1123 23:18:28.179457 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.179467 kubelet[2667]: E1123 23:18:28.179465 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.184433 kubelet[2667]: E1123 23:18:28.184405 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.184433 kubelet[2667]: W1123 23:18:28.184426 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.184681 kubelet[2667]: E1123 23:18:28.184445 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.184681 kubelet[2667]: I1123 23:18:28.184472 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a03beb28-b117-427c-9d7b-d6307465538e-kubelet-dir\") pod \"csi-node-driver-mlht4\" (UID: \"a03beb28-b117-427c-9d7b-d6307465538e\") " pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:28.184681 kubelet[2667]: E1123 23:18:28.184667 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.184900 kubelet[2667]: W1123 23:18:28.184678 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.184900 kubelet[2667]: E1123 23:18:28.184695 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.184900 kubelet[2667]: I1123 23:18:28.184713 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a03beb28-b117-427c-9d7b-d6307465538e-registration-dir\") pod \"csi-node-driver-mlht4\" (UID: \"a03beb28-b117-427c-9d7b-d6307465538e\") " pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:28.184965 kubelet[2667]: E1123 23:18:28.184908 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.184965 kubelet[2667]: W1123 23:18:28.184920 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.184965 kubelet[2667]: E1123 23:18:28.184929 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.184965 kubelet[2667]: I1123 23:18:28.184944 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a03beb28-b117-427c-9d7b-d6307465538e-varrun\") pod \"csi-node-driver-mlht4\" (UID: \"a03beb28-b117-427c-9d7b-d6307465538e\") " pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:28.185288 kubelet[2667]: E1123 23:18:28.185268 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.185333 kubelet[2667]: W1123 23:18:28.185290 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.185333 kubelet[2667]: E1123 23:18:28.185302 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.186197 kubelet[2667]: E1123 23:18:28.185742 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.186197 kubelet[2667]: W1123 23:18:28.186050 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.186197 kubelet[2667]: E1123 23:18:28.186071 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.186197 kubelet[2667]: I1123 23:18:28.186016 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrtl\" (UniqueName: \"kubernetes.io/projected/a03beb28-b117-427c-9d7b-d6307465538e-kube-api-access-gbrtl\") pod \"csi-node-driver-mlht4\" (UID: \"a03beb28-b117-427c-9d7b-d6307465538e\") " pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:28.187176 kubelet[2667]: E1123 23:18:28.187156 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.187323 kubelet[2667]: W1123 23:18:28.187302 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.187546 kubelet[2667]: E1123 23:18:28.187476 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.189953 kubelet[2667]: E1123 23:18:28.189920 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.190143 kubelet[2667]: W1123 23:18:28.190032 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.190143 kubelet[2667]: E1123 23:18:28.190057 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.191017 kubelet[2667]: E1123 23:18:28.190896 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.191436 kubelet[2667]: W1123 23:18:28.191218 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.191436 kubelet[2667]: E1123 23:18:28.191246 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.191912 kubelet[2667]: E1123 23:18:28.191875 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.192013 kubelet[2667]: W1123 23:18:28.191974 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.192102 kubelet[2667]: E1123 23:18:28.192082 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.192581 kubelet[2667]: E1123 23:18:28.192562 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.192835 kubelet[2667]: W1123 23:18:28.192816 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.192923 kubelet[2667]: E1123 23:18:28.192912 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.193105 kubelet[2667]: I1123 23:18:28.193069 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a03beb28-b117-427c-9d7b-d6307465538e-socket-dir\") pod \"csi-node-driver-mlht4\" (UID: \"a03beb28-b117-427c-9d7b-d6307465538e\") " pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:28.193606 kubelet[2667]: E1123 23:18:28.193584 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.193767 kubelet[2667]: W1123 23:18:28.193665 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.193767 kubelet[2667]: E1123 23:18:28.193699 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.194164 kubelet[2667]: E1123 23:18:28.194058 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.194264 kubelet[2667]: W1123 23:18:28.194250 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.194350 kubelet[2667]: E1123 23:18:28.194338 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.196131 kubelet[2667]: E1123 23:18:28.196105 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.196228 kubelet[2667]: W1123 23:18:28.196215 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.196291 kubelet[2667]: E1123 23:18:28.196270 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.197048 kubelet[2667]: E1123 23:18:28.196953 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.197048 kubelet[2667]: W1123 23:18:28.196969 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.197048 kubelet[2667]: E1123 23:18:28.196986 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.198635 kubelet[2667]: E1123 23:18:28.198601 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.198882 kubelet[2667]: W1123 23:18:28.198805 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.199317 kubelet[2667]: E1123 23:18:28.199141 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.206243 systemd[1]: Started cri-containerd-bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7.scope - libcontainer container bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7. 
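The repeated driver-call.go and plugins.go errors above are the kubelet probing its FlexVolume plugin directory: the nodeagent~uds driver binary is not present under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, so each "init" call returns empty output and decoding that output as JSON fails with "unexpected end of JSON input". A minimal Go sketch of the decode step (the struct below is illustrative, loosely following the FlexVolume status convention, not the kubelet's own type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume driver
// is expected to print in response to "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// What a working driver would print for "init".
	good := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	var ok driverStatus
	fmt.Println(json.Unmarshal(good, &ok), ok.Status) // <nil> Success

	// What the kubelet got here: the executable is missing, so its output is "".
	var missing driverStatus
	fmt.Println(json.Unmarshal([]byte(""), &missing)) // unexpected end of JSON input
}
```

The calico-node pod being set up above mounts flexvol-driver-host for exactly this purpose; the warnings typically stop once its pod2daemon-flexvol init container (whose image is pulled later in this log) has installed the uds binary into that directory.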
Nov 23 23:18:28.251115 containerd[1506]: time="2025-11-23T23:18:28.251068552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bcc7fd586-xlbgs,Uid:c0ba77af-ffa9-44d9-9a42-db60fc95bbca,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7\"" Nov 23 23:18:28.252821 containerd[1506]: time="2025-11-23T23:18:28.252785857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:18:28.257821 containerd[1506]: time="2025-11-23T23:18:28.257782501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-chz59,Uid:54c26791-4037-4582-9658-1ac0b12d01c4,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:28.295662 containerd[1506]: time="2025-11-23T23:18:28.295611104Z" level=info msg="connecting to shim c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209" address="unix:///run/containerd/s/f7984adcf69f6fb2503e3425b305182df30fb189cf36a638a7a4fb9a0641616b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:28.297933 kubelet[2667]: E1123 23:18:28.297887 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.297933 kubelet[2667]: W1123 23:18:28.297923 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.297933 kubelet[2667]: E1123 23:18:28.297944 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.299233 kubelet[2667]: E1123 23:18:28.299196 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.299233 kubelet[2667]: W1123 23:18:28.299217 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.299233 kubelet[2667]: E1123 23:18:28.299234 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.302137 kubelet[2667]: E1123 23:18:28.302099 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.302137 kubelet[2667]: W1123 23:18:28.302121 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.302137 kubelet[2667]: E1123 23:18:28.302140 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.302714 kubelet[2667]: E1123 23:18:28.302491 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.302714 kubelet[2667]: W1123 23:18:28.302542 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.302714 kubelet[2667]: E1123 23:18:28.302556 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.303555 kubelet[2667]: E1123 23:18:28.303267 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.303555 kubelet[2667]: W1123 23:18:28.303286 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.303555 kubelet[2667]: E1123 23:18:28.303345 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.303555 kubelet[2667]: E1123 23:18:28.303558 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.303703 kubelet[2667]: W1123 23:18:28.303572 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.303703 kubelet[2667]: E1123 23:18:28.303583 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.304685 kubelet[2667]: E1123 23:18:28.304663 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.304685 kubelet[2667]: W1123 23:18:28.304682 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.304798 kubelet[2667]: E1123 23:18:28.304698 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.304929 kubelet[2667]: E1123 23:18:28.304915 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.304929 kubelet[2667]: W1123 23:18:28.304927 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.305037 kubelet[2667]: E1123 23:18:28.304937 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.305174 kubelet[2667]: E1123 23:18:28.305147 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.305174 kubelet[2667]: W1123 23:18:28.305162 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.305174 kubelet[2667]: E1123 23:18:28.305178 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.305999 kubelet[2667]: E1123 23:18:28.305729 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.305999 kubelet[2667]: W1123 23:18:28.305774 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.305999 kubelet[2667]: E1123 23:18:28.305791 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.305999 kubelet[2667]: E1123 23:18:28.305985 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.305999 kubelet[2667]: W1123 23:18:28.305994 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.306364 kubelet[2667]: E1123 23:18:28.306045 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.306739 kubelet[2667]: E1123 23:18:28.306491 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.306739 kubelet[2667]: W1123 23:18:28.306508 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.306739 kubelet[2667]: E1123 23:18:28.306519 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.306739 kubelet[2667]: E1123 23:18:28.306719 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.306739 kubelet[2667]: W1123 23:18:28.306728 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.306739 kubelet[2667]: E1123 23:18:28.306737 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.306973 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.308460 kubelet[2667]: W1123 23:18:28.306985 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.306995 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.307434 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.308460 kubelet[2667]: W1123 23:18:28.307449 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.307469 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.307900 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.308460 kubelet[2667]: W1123 23:18:28.307922 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.308460 kubelet[2667]: E1123 23:18:28.307936 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.309316 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.310972 kubelet[2667]: W1123 23:18:28.309337 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.309361 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.309964 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.310972 kubelet[2667]: W1123 23:18:28.309980 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.309993 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.310469 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.310972 kubelet[2667]: W1123 23:18:28.310481 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.310972 kubelet[2667]: E1123 23:18:28.310493 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.312064 kubelet[2667]: E1123 23:18:28.311338 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.312064 kubelet[2667]: W1123 23:18:28.311355 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.312064 kubelet[2667]: E1123 23:18:28.311369 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.312587 kubelet[2667]: E1123 23:18:28.312407 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.312587 kubelet[2667]: W1123 23:18:28.312552 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.312587 kubelet[2667]: E1123 23:18:28.312569 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.313326 kubelet[2667]: E1123 23:18:28.313308 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.313536 kubelet[2667]: W1123 23:18:28.313395 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.313619 kubelet[2667]: E1123 23:18:28.313601 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.315678 kubelet[2667]: E1123 23:18:28.315644 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.316171 kubelet[2667]: W1123 23:18:28.316077 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.316171 kubelet[2667]: E1123 23:18:28.316108 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:28.317441 kubelet[2667]: E1123 23:18:28.317277 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.317441 kubelet[2667]: W1123 23:18:28.317298 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.317441 kubelet[2667]: E1123 23:18:28.317314 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.319198 kubelet[2667]: E1123 23:18:28.319176 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.319198 kubelet[2667]: W1123 23:18:28.319195 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.319316 kubelet[2667]: E1123 23:18:28.319214 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.341228 kubelet[2667]: E1123 23:18:28.341171 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:28.341228 kubelet[2667]: W1123 23:18:28.341198 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:28.341474 kubelet[2667]: E1123 23:18:28.341322 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:28.369253 systemd[1]: Started cri-containerd-c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209.scope - libcontainer container c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209. Nov 23 23:18:28.403099 containerd[1506]: time="2025-11-23T23:18:28.403057860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-chz59,Uid:54c26791-4037-4582-9658-1ac0b12d01c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\"" Nov 23 23:18:29.257951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171497565.mount: Deactivated successfully. 
Nov 23 23:18:29.901542 containerd[1506]: time="2025-11-23T23:18:29.900337784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:29.901542 containerd[1506]: time="2025-11-23T23:18:29.901035476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:18:29.901983 containerd[1506]: time="2025-11-23T23:18:29.901810142Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:29.904478 containerd[1506]: time="2025-11-23T23:18:29.904413312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:29.905279 containerd[1506]: time="2025-11-23T23:18:29.905234787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.652404561s" Nov 23 23:18:29.905279 containerd[1506]: time="2025-11-23T23:18:29.905284997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:18:29.907263 containerd[1506]: time="2025-11-23T23:18:29.906550995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:18:29.924035 containerd[1506]: time="2025-11-23T23:18:29.923970357Z" level=info msg="CreateContainer within sandbox \"bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:18:29.934442 containerd[1506]: time="2025-11-23T23:18:29.934395682Z" level=info msg="Container 75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:29.946183 containerd[1506]: time="2025-11-23T23:18:29.946136854Z" level=info msg="CreateContainer within sandbox \"bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10\"" Nov 23 23:18:29.946890 containerd[1506]: time="2025-11-23T23:18:29.946856670Z" level=info msg="StartContainer for \"75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10\"" Nov 23 23:18:29.949819 containerd[1506]: time="2025-11-23T23:18:29.949783381Z" level=info msg="connecting to shim 75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10" address="unix:///run/containerd/s/36860618b7f328ae97c759909bbb03e27d953fdea62b721057726e0150fb4240" protocol=ttrpc version=3 Nov 23 23:18:29.977251 systemd[1]: Started cri-containerd-75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10.scope - libcontainer container 75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10. 
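The containerd entries above trace the standard CRI sequence for the typha pod: RunPodSandbox, PullImage, CreateContainer inside that sandbox, then StartContainer. A rough Go sketch of the same sequence against containerd's CRI socket, with the socket path, image, and sandbox id taken from the log; the configs are trimmed far below what the kubelet actually sends, so treat it as an outline rather than a working client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet speaks CRI over containerd's socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	runtime := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. PullImage — mirrors the PullImage/Pulled entries for calico/typha.
	pulled, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer in the sandbox returned earlier by RunPodSandbox.
	sandboxID := "bbf5316c738968b81c2e3532b8b6829138d03dd96b448e69eba37ddf009044c7"
	created, err := runtime.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
		// A real call also carries the full PodSandboxConfig the kubelet built.
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-6bcc7fd586-xlbgs",
				Namespace: "calico-system",
				Uid:       "c0ba77af-ffa9-44d9-9a42-db60fc95bbca",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer — corresponds to the "StartContainer ... returns successfully" entry.
	_, err = runtime.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	fmt.Println(created.ContainerId, err)
}
```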
Nov 23 23:18:30.028272 containerd[1506]: time="2025-11-23T23:18:30.028209839Z" level=info msg="StartContainer for \"75246afe01362aa6e49b6700f906b6890ade8c4691834e6346f5939bc8ca6d10\" returns successfully" Nov 23 23:18:30.470032 kubelet[2667]: E1123 23:18:30.469958 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:30.566015 kubelet[2667]: I1123 23:18:30.565946 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bcc7fd586-xlbgs" podStartSLOduration=1.911842825 podStartE2EDuration="3.56592675s" podCreationTimestamp="2025-11-23 23:18:27 +0000 UTC" firstStartedPulling="2025-11-23 23:18:28.252279555 +0000 UTC m=+20.879501298" lastFinishedPulling="2025-11-23 23:18:29.90636344 +0000 UTC m=+22.533585223" observedRunningTime="2025-11-23 23:18:30.564760184 +0000 UTC m=+23.191981967" watchObservedRunningTime="2025-11-23 23:18:30.56592675 +0000 UTC m=+23.193148533" Nov 23 23:18:30.598104 kubelet[2667]: E1123 23:18:30.597991 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.598104 kubelet[2667]: W1123 23:18:30.598044 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.598104 kubelet[2667]: E1123 23:18:30.598068 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.598548 kubelet[2667]: E1123 23:18:30.598525 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.598718 kubelet[2667]: W1123 23:18:30.598648 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.598718 kubelet[2667]: E1123 23:18:30.598671 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.599129 kubelet[2667]: E1123 23:18:30.599074 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.599129 kubelet[2667]: W1123 23:18:30.599090 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.599483 kubelet[2667]: E1123 23:18:30.599102 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.599597 kubelet[2667]: E1123 23:18:30.599582 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.599673 kubelet[2667]: W1123 23:18:30.599662 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.599813 kubelet[2667]: E1123 23:18:30.599714 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.600051 kubelet[2667]: E1123 23:18:30.600035 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.600202 kubelet[2667]: W1123 23:18:30.600127 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.600202 kubelet[2667]: E1123 23:18:30.600182 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.600579 kubelet[2667]: E1123 23:18:30.600564 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.600717 kubelet[2667]: W1123 23:18:30.600644 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.600717 kubelet[2667]: E1123 23:18:30.600662 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.600943 kubelet[2667]: E1123 23:18:30.600903 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.600943 kubelet[2667]: W1123 23:18:30.600917 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.600943 kubelet[2667]: E1123 23:18:30.600927 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.601317 kubelet[2667]: E1123 23:18:30.601272 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.601317 kubelet[2667]: W1123 23:18:30.601286 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.601317 kubelet[2667]: E1123 23:18:30.601297 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.601672 kubelet[2667]: E1123 23:18:30.601636 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.601672 kubelet[2667]: W1123 23:18:30.601649 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.601672 kubelet[2667]: E1123 23:18:30.601659 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.601971 kubelet[2667]: E1123 23:18:30.601940 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.601971 kubelet[2667]: W1123 23:18:30.601961 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.601971 kubelet[2667]: E1123 23:18:30.601971 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.602150 kubelet[2667]: E1123 23:18:30.602135 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.602150 kubelet[2667]: W1123 23:18:30.602146 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.602200 kubelet[2667]: E1123 23:18:30.602155 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.602301 kubelet[2667]: E1123 23:18:30.602289 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.602301 kubelet[2667]: W1123 23:18:30.602299 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.602301 kubelet[2667]: E1123 23:18:30.602307 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.602455 kubelet[2667]: E1123 23:18:30.602441 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.602969 kubelet[2667]: W1123 23:18:30.602460 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.602969 kubelet[2667]: E1123 23:18:30.602471 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.602969 kubelet[2667]: E1123 23:18:30.602587 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.602969 kubelet[2667]: W1123 23:18:30.602593 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.602969 kubelet[2667]: E1123 23:18:30.602600 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.602969 kubelet[2667]: E1123 23:18:30.602712 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.602969 kubelet[2667]: W1123 23:18:30.602719 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.602969 kubelet[2667]: E1123 23:18:30.602726 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.623986 kubelet[2667]: E1123 23:18:30.623938 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.623986 kubelet[2667]: W1123 23:18:30.623966 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.623986 kubelet[2667]: E1123 23:18:30.623986 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.624186 kubelet[2667]: E1123 23:18:30.624178 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.624210 kubelet[2667]: W1123 23:18:30.624187 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.624210 kubelet[2667]: E1123 23:18:30.624206 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.624460 kubelet[2667]: E1123 23:18:30.624438 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.624460 kubelet[2667]: W1123 23:18:30.624457 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.624507 kubelet[2667]: E1123 23:18:30.624471 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.624663 kubelet[2667]: E1123 23:18:30.624651 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.624687 kubelet[2667]: W1123 23:18:30.624663 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.624687 kubelet[2667]: E1123 23:18:30.624673 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.624826 kubelet[2667]: E1123 23:18:30.624814 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.624857 kubelet[2667]: W1123 23:18:30.624825 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.624857 kubelet[2667]: E1123 23:18:30.624834 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.625052 kubelet[2667]: E1123 23:18:30.624995 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.625052 kubelet[2667]: W1123 23:18:30.625019 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.625052 kubelet[2667]: E1123 23:18:30.625028 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.625279 kubelet[2667]: E1123 23:18:30.625258 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.625279 kubelet[2667]: W1123 23:18:30.625276 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.625333 kubelet[2667]: E1123 23:18:30.625288 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.625526 kubelet[2667]: E1123 23:18:30.625510 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.625526 kubelet[2667]: W1123 23:18:30.625523 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.625577 kubelet[2667]: E1123 23:18:30.625534 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.625723 kubelet[2667]: E1123 23:18:30.625710 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.625744 kubelet[2667]: W1123 23:18:30.625722 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.625744 kubelet[2667]: E1123 23:18:30.625731 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.625900 kubelet[2667]: E1123 23:18:30.625888 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.625920 kubelet[2667]: W1123 23:18:30.625900 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.625920 kubelet[2667]: E1123 23:18:30.625908 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.626097 kubelet[2667]: E1123 23:18:30.626084 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.626128 kubelet[2667]: W1123 23:18:30.626097 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.626128 kubelet[2667]: E1123 23:18:30.626106 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.626255 kubelet[2667]: E1123 23:18:30.626242 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.626281 kubelet[2667]: W1123 23:18:30.626254 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.626281 kubelet[2667]: E1123 23:18:30.626263 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.626467 kubelet[2667]: E1123 23:18:30.626452 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.626467 kubelet[2667]: W1123 23:18:30.626464 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.626518 kubelet[2667]: E1123 23:18:30.626473 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.626748 kubelet[2667]: E1123 23:18:30.626723 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.626770 kubelet[2667]: W1123 23:18:30.626747 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.626846 kubelet[2667]: E1123 23:18:30.626829 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.628036 kubelet[2667]: E1123 23:18:30.627386 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.628036 kubelet[2667]: W1123 23:18:30.627406 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.628036 kubelet[2667]: E1123 23:18:30.627422 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.628036 kubelet[2667]: E1123 23:18:30.627657 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.628036 kubelet[2667]: W1123 23:18:30.627667 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.628036 kubelet[2667]: E1123 23:18:30.627678 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.628405 kubelet[2667]: E1123 23:18:30.628223 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.628405 kubelet[2667]: W1123 23:18:30.628237 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.628405 kubelet[2667]: E1123 23:18:30.628255 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:18:30.633878 kubelet[2667]: E1123 23:18:30.633784 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:18:30.633878 kubelet[2667]: W1123 23:18:30.633808 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:18:30.633878 kubelet[2667]: E1123 23:18:30.633831 2667 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:18:30.958338 containerd[1506]: time="2025-11-23T23:18:30.958258858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:30.958848 containerd[1506]: time="2025-11-23T23:18:30.958797353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:18:30.959857 containerd[1506]: time="2025-11-23T23:18:30.959808252Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:30.962191 containerd[1506]: time="2025-11-23T23:18:30.962149785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:30.963053 containerd[1506]: time="2025-11-23T23:18:30.962761934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.056119561s" Nov 23 23:18:30.963053 containerd[1506]: time="2025-11-23T23:18:30.962790459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:18:30.968138 containerd[1506]: time="2025-11-23T23:18:30.968067711Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:18:30.977377 containerd[1506]: time="2025-11-23T23:18:30.976259278Z" level=info msg="Container f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:30.985905 containerd[1506]: time="2025-11-23T23:18:30.985856413Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65\"" Nov 23 23:18:30.986611 containerd[1506]: time="2025-11-23T23:18:30.986484244Z" level=info msg="StartContainer for \"f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65\"" Nov 23 23:18:30.988393 containerd[1506]: time="2025-11-23T23:18:30.988347093Z" level=info msg="connecting to shim f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65" address="unix:///run/containerd/s/f7984adcf69f6fb2503e3425b305182df30fb189cf36a638a7a4fb9a0641616b" protocol=ttrpc version=3 Nov 23 23:18:31.012245 systemd[1]: Started cri-containerd-f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65.scope - libcontainer container f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65. 
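The flexvol-driver container started above is the Calico pod2daemon-flexvol init step that installs the uds binary the kubelet was probing for in the earlier FlexVolume errors. A small, hypothetical check (not part of Calico or the kubelet) that the probed path is now populated and executable:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the FlexVolume driver-call errors earlier in this log.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	// Any execute bit set is enough for the kubelet to invoke the binary.
	fmt.Printf("%s: mode=%v executable=%t\n", driver, info.Mode(), info.Mode()&0o111 != 0)
}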
Nov 23 23:18:31.088819 containerd[1506]: time="2025-11-23T23:18:31.088756383Z" level=info msg="StartContainer for \"f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65\" returns successfully" Nov 23 23:18:31.105318 systemd[1]: cri-containerd-f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65.scope: Deactivated successfully. Nov 23 23:18:31.139827 containerd[1506]: time="2025-11-23T23:18:31.139762030Z" level=info msg="received container exit event container_id:\"f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65\" id:\"f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65\" pid:3389 exited_at:{seconds:1763939911 nanos:127922309}" Nov 23 23:18:31.189208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6126bd075dbe9e3902579393789609b2c7e8fcfda2a7d948a9d5647936acb65-rootfs.mount: Deactivated successfully. Nov 23 23:18:31.555993 kubelet[2667]: I1123 23:18:31.555949 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:18:31.557073 containerd[1506]: time="2025-11-23T23:18:31.557029137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:18:32.469565 kubelet[2667]: E1123 23:18:32.469504 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:34.470405 kubelet[2667]: E1123 23:18:34.470331 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:34.657672 containerd[1506]: time="2025-11-23T23:18:34.657610260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:34.658220 containerd[1506]: time="2025-11-23T23:18:34.658195860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:18:34.658939 containerd[1506]: time="2025-11-23T23:18:34.658893235Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:34.661282 containerd[1506]: time="2025-11-23T23:18:34.661223433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:34.662045 containerd[1506]: time="2025-11-23T23:18:34.661831996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.104759653s" Nov 23 23:18:34.662045 containerd[1506]: time="2025-11-23T23:18:34.661866321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" 
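The container exit events above carry protobuf-style exited_at timestamps as a seconds/nanos pair. Converting the value from the f6126bd... exit event back to wall-clock time lines up with the surrounding journal entries; a small sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the "received container exit event" entry above
	// (seconds:1763939911 nanos:127922309).
	exitedAt := time.Unix(1763939911, 127922309).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-11-23T23:18:31.127922309Z
}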
Nov 23 23:18:34.665784 containerd[1506]: time="2025-11-23T23:18:34.665746770Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:18:34.675092 containerd[1506]: time="2025-11-23T23:18:34.674581736Z" level=info msg="Container 57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:34.682248 containerd[1506]: time="2025-11-23T23:18:34.682194415Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439\"" Nov 23 23:18:34.682892 containerd[1506]: time="2025-11-23T23:18:34.682738569Z" level=info msg="StartContainer for \"57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439\"" Nov 23 23:18:34.685019 containerd[1506]: time="2025-11-23T23:18:34.684891143Z" level=info msg="connecting to shim 57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439" address="unix:///run/containerd/s/f7984adcf69f6fb2503e3425b305182df30fb189cf36a638a7a4fb9a0641616b" protocol=ttrpc version=3 Nov 23 23:18:34.715258 systemd[1]: Started cri-containerd-57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439.scope - libcontainer container 57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439. Nov 23 23:18:34.911810 containerd[1506]: time="2025-11-23T23:18:34.911749022Z" level=info msg="StartContainer for \"57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439\" returns successfully" Nov 23 23:18:35.374915 systemd[1]: cri-containerd-57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439.scope: Deactivated successfully. Nov 23 23:18:35.375213 systemd[1]: cri-containerd-57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439.scope: Consumed 484ms CPU time, 176.8M memory peak, 272K read from disk, 165.9M written to disk. Nov 23 23:18:35.377946 containerd[1506]: time="2025-11-23T23:18:35.377249059Z" level=info msg="received container exit event container_id:\"57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439\" id:\"57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439\" pid:3449 exited_at:{seconds:1763939915 nanos:376843207}" Nov 23 23:18:35.400835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57acc73e89a9543355a2d4d4b2cb7288a4edcf45d94d159b6b52d9f5bafa0439-rootfs.mount: Deactivated successfully. Nov 23 23:18:35.407541 kubelet[2667]: I1123 23:18:35.407509 2667 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 23 23:18:35.471302 systemd[1]: Created slice kubepods-burstable-podaf0c333b_c43b_42f7_a468_945f73c1365e.slice - libcontainer container kubepods-burstable-podaf0c333b_c43b_42f7_a468_945f73c1365e.slice. Nov 23 23:18:35.480617 systemd[1]: Created slice kubepods-burstable-podbe86530a_9db5_4cbc_a5f6_d5851a9dfe01.slice - libcontainer container kubepods-burstable-podbe86530a_9db5_4cbc_a5f6_d5851a9dfe01.slice. Nov 23 23:18:35.494497 systemd[1]: Created slice kubepods-besteffort-pod2fa08b76_cbac_4d84_b388_cf070905dbf5.slice - libcontainer container kubepods-besteffort-pod2fa08b76_cbac_4d84_b388_cf070905dbf5.slice. 
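The kubepods-*.slice names created above follow a visible pattern: the pod's QoS class plus its UID with dashes replaced by underscores. An illustrative helper reproducing that naming (not the kubelet's own implementation):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the systemd slice naming seen in this log:
// "kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice".
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Both UIDs appear in the "Created slice" entries above.
	fmt.Println(sliceName("burstable", "af0c333b-c43b-42f7-a468-945f73c1365e"))
	// kubepods-burstable-podaf0c333b_c43b_42f7_a468_945f73c1365e.slice
	fmt.Println(sliceName("besteffort", "2fa08b76-cbac-4d84-b388-cf070905dbf5"))
	// kubepods-besteffort-pod2fa08b76_cbac_4d84_b388_cf070905dbf5.slice
}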
Nov 23 23:18:35.502917 systemd[1]: Created slice kubepods-besteffort-pod6e697a16_c70a_4db1_9f7e_9b02bca06e22.slice - libcontainer container kubepods-besteffort-pod6e697a16_c70a_4db1_9f7e_9b02bca06e22.slice. Nov 23 23:18:35.509740 systemd[1]: Created slice kubepods-besteffort-poda42a1cdf_25c9_4cc0_97f9_6bc1707a2182.slice - libcontainer container kubepods-besteffort-poda42a1cdf_25c9_4cc0_97f9_6bc1707a2182.slice. Nov 23 23:18:35.515782 systemd[1]: Created slice kubepods-besteffort-pod0cf0318d_6de7_4a09_88fe_94af28c80c28.slice - libcontainer container kubepods-besteffort-pod0cf0318d_6de7_4a09_88fe_94af28c80c28.slice. Nov 23 23:18:35.523939 systemd[1]: Created slice kubepods-besteffort-pod99271428_091e_451d_a8a9_b200a082e2d3.slice - libcontainer container kubepods-besteffort-pod99271428_091e_451d_a8a9_b200a082e2d3.slice. Nov 23 23:18:35.559632 kubelet[2667]: I1123 23:18:35.559585 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af0c333b-c43b-42f7-a468-945f73c1365e-config-volume\") pod \"coredns-66bc5c9577-d7q7t\" (UID: \"af0c333b-c43b-42f7-a468-945f73c1365e\") " pod="kube-system/coredns-66bc5c9577-d7q7t" Nov 23 23:18:35.559632 kubelet[2667]: I1123 23:18:35.559633 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be86530a-9db5-4cbc-a5f6-d5851a9dfe01-config-volume\") pod \"coredns-66bc5c9577-b8l7s\" (UID: \"be86530a-9db5-4cbc-a5f6-d5851a9dfe01\") " pod="kube-system/coredns-66bc5c9577-b8l7s" Nov 23 23:18:35.560075 kubelet[2667]: I1123 23:18:35.559652 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckpqj\" (UniqueName: \"kubernetes.io/projected/be86530a-9db5-4cbc-a5f6-d5851a9dfe01-kube-api-access-ckpqj\") pod \"coredns-66bc5c9577-b8l7s\" (UID: \"be86530a-9db5-4cbc-a5f6-d5851a9dfe01\") " pod="kube-system/coredns-66bc5c9577-b8l7s" Nov 23 23:18:35.560075 kubelet[2667]: I1123 23:18:35.559672 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-ca-bundle\") pod \"whisker-75fb6c4c57-57dl7\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " pod="calico-system/whisker-75fb6c4c57-57dl7" Nov 23 23:18:35.560075 kubelet[2667]: I1123 23:18:35.559690 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dpq\" (UniqueName: \"kubernetes.io/projected/a42a1cdf-25c9-4cc0-97f9-6bc1707a2182-kube-api-access-v2dpq\") pod \"calico-apiserver-77bb69bc5c-4pslb\" (UID: \"a42a1cdf-25c9-4cc0-97f9-6bc1707a2182\") " pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" Nov 23 23:18:35.560075 kubelet[2667]: I1123 23:18:35.559709 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7qc\" (UniqueName: \"kubernetes.io/projected/2fa08b76-cbac-4d84-b388-cf070905dbf5-kube-api-access-7p7qc\") pod \"whisker-75fb6c4c57-57dl7\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " pod="calico-system/whisker-75fb6c4c57-57dl7" Nov 23 23:18:35.560075 kubelet[2667]: I1123 23:18:35.559729 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0cf0318d-6de7-4a09-88fe-94af28c80c28-calico-apiserver-certs\") pod \"calico-apiserver-77bb69bc5c-r2rtw\" (UID: \"0cf0318d-6de7-4a09-88fe-94af28c80c28\") " pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" Nov 23 23:18:35.560749 kubelet[2667]: I1123 23:18:35.559749 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9hc\" (UniqueName: \"kubernetes.io/projected/0cf0318d-6de7-4a09-88fe-94af28c80c28-kube-api-access-6t9hc\") pod \"calico-apiserver-77bb69bc5c-r2rtw\" (UID: \"0cf0318d-6de7-4a09-88fe-94af28c80c28\") " pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" Nov 23 23:18:35.560749 kubelet[2667]: I1123 23:18:35.559765 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e697a16-c70a-4db1-9f7e-9b02bca06e22-config\") pod \"goldmane-7c778bb748-zjz79\" (UID: \"6e697a16-c70a-4db1-9f7e-9b02bca06e22\") " pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.560749 kubelet[2667]: I1123 23:18:35.559781 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426qm\" (UniqueName: \"kubernetes.io/projected/6e697a16-c70a-4db1-9f7e-9b02bca06e22-kube-api-access-426qm\") pod \"goldmane-7c778bb748-zjz79\" (UID: \"6e697a16-c70a-4db1-9f7e-9b02bca06e22\") " pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.560749 kubelet[2667]: I1123 23:18:35.559796 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624dk\" (UniqueName: \"kubernetes.io/projected/99271428-091e-451d-a8a9-b200a082e2d3-kube-api-access-624dk\") pod \"calico-kube-controllers-664f9fcd4f-9fzcc\" (UID: \"99271428-091e-451d-a8a9-b200a082e2d3\") " pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" Nov 23 23:18:35.560749 kubelet[2667]: I1123 23:18:35.559825 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a42a1cdf-25c9-4cc0-97f9-6bc1707a2182-calico-apiserver-certs\") pod \"calico-apiserver-77bb69bc5c-4pslb\" (UID: \"a42a1cdf-25c9-4cc0-97f9-6bc1707a2182\") " pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" Nov 23 23:18:35.560894 kubelet[2667]: I1123 23:18:35.559845 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6e697a16-c70a-4db1-9f7e-9b02bca06e22-goldmane-key-pair\") pod \"goldmane-7c778bb748-zjz79\" (UID: \"6e697a16-c70a-4db1-9f7e-9b02bca06e22\") " pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.560894 kubelet[2667]: I1123 23:18:35.559866 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-backend-key-pair\") pod \"whisker-75fb6c4c57-57dl7\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " pod="calico-system/whisker-75fb6c4c57-57dl7" Nov 23 23:18:35.560894 kubelet[2667]: I1123 23:18:35.559883 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992j7\" (UniqueName: \"kubernetes.io/projected/af0c333b-c43b-42f7-a468-945f73c1365e-kube-api-access-992j7\") pod \"coredns-66bc5c9577-d7q7t\" (UID: \"af0c333b-c43b-42f7-a468-945f73c1365e\") " 
pod="kube-system/coredns-66bc5c9577-d7q7t" Nov 23 23:18:35.560894 kubelet[2667]: I1123 23:18:35.559897 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e697a16-c70a-4db1-9f7e-9b02bca06e22-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zjz79\" (UID: \"6e697a16-c70a-4db1-9f7e-9b02bca06e22\") " pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.560894 kubelet[2667]: I1123 23:18:35.559919 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99271428-091e-451d-a8a9-b200a082e2d3-tigera-ca-bundle\") pod \"calico-kube-controllers-664f9fcd4f-9fzcc\" (UID: \"99271428-091e-451d-a8a9-b200a082e2d3\") " pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" Nov 23 23:18:35.572125 containerd[1506]: time="2025-11-23T23:18:35.572073545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:18:35.780305 containerd[1506]: time="2025-11-23T23:18:35.780201893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7q7t,Uid:af0c333b-c43b-42f7-a468-945f73c1365e,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:35.787900 containerd[1506]: time="2025-11-23T23:18:35.787859273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8l7s,Uid:be86530a-9db5-4cbc-a5f6-d5851a9dfe01,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:35.801831 containerd[1506]: time="2025-11-23T23:18:35.801792615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fb6c4c57-57dl7,Uid:2fa08b76-cbac-4d84-b388-cf070905dbf5,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:35.809259 containerd[1506]: time="2025-11-23T23:18:35.809216965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjz79,Uid:6e697a16-c70a-4db1-9f7e-9b02bca06e22,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:35.815477 containerd[1506]: time="2025-11-23T23:18:35.815429960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-4pslb,Uid:a42a1cdf-25c9-4cc0-97f9-6bc1707a2182,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:18:35.825330 containerd[1506]: time="2025-11-23T23:18:35.825285941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-r2rtw,Uid:0cf0318d-6de7-4a09-88fe-94af28c80c28,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:18:35.837960 containerd[1506]: time="2025-11-23T23:18:35.837810384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664f9fcd4f-9fzcc,Uid:99271428-091e-451d-a8a9-b200a082e2d3,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:35.903491 containerd[1506]: time="2025-11-23T23:18:35.903440140Z" level=error msg="Failed to destroy network for sandbox \"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.905249 containerd[1506]: time="2025-11-23T23:18:35.905192524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fb6c4c57-57dl7,Uid:2fa08b76-cbac-4d84-b388-cf070905dbf5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.905677 kubelet[2667]: E1123 23:18:35.905632 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.905768 kubelet[2667]: E1123 23:18:35.905709 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75fb6c4c57-57dl7" Nov 23 23:18:35.905768 kubelet[2667]: E1123 23:18:35.905730 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75fb6c4c57-57dl7" Nov 23 23:18:35.905829 kubelet[2667]: E1123 23:18:35.905777 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75fb6c4c57-57dl7_calico-system(2fa08b76-cbac-4d84-b388-cf070905dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75fb6c4c57-57dl7_calico-system(2fa08b76-cbac-4d84-b388-cf070905dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a9cf1b4e2843362e21049759715161ba9abf0219f34bb4392cf36fb67f44b5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75fb6c4c57-57dl7" podUID="2fa08b76-cbac-4d84-b388-cf070905dbf5" Nov 23 23:18:35.913377 containerd[1506]: time="2025-11-23T23:18:35.913313123Z" level=error msg="Failed to destroy network for sandbox \"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.915873 containerd[1506]: time="2025-11-23T23:18:35.915820244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7q7t,Uid:af0c333b-c43b-42f7-a468-945f73c1365e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.916136 kubelet[2667]: E1123 23:18:35.916060 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.916136 kubelet[2667]: E1123 23:18:35.916124 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d7q7t" Nov 23 23:18:35.916236 kubelet[2667]: E1123 23:18:35.916151 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d7q7t" Nov 23 23:18:35.916236 kubelet[2667]: E1123 23:18:35.916206 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d7q7t_kube-system(af0c333b-c43b-42f7-a468-945f73c1365e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d7q7t_kube-system(af0c333b-c43b-42f7-a468-945f73c1365e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fface9be1179c321dd62168cb1ffc21ede904cb1bf499fe0e248307e8c98f5de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-d7q7t" podUID="af0c333b-c43b-42f7-a468-945f73c1365e" Nov 23 23:18:35.921404 containerd[1506]: time="2025-11-23T23:18:35.920931778Z" level=error msg="Failed to destroy network for sandbox \"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.922266 containerd[1506]: time="2025-11-23T23:18:35.922224464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8l7s,Uid:be86530a-9db5-4cbc-a5f6-d5851a9dfe01,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.922467 kubelet[2667]: E1123 23:18:35.922430 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.922519 kubelet[2667]: E1123 23:18:35.922483 2667 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b8l7s" Nov 23 23:18:35.922519 kubelet[2667]: E1123 23:18:35.922502 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-b8l7s" Nov 23 23:18:35.922567 kubelet[2667]: E1123 23:18:35.922548 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-b8l7s_kube-system(be86530a-9db5-4cbc-a5f6-d5851a9dfe01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-b8l7s_kube-system(be86530a-9db5-4cbc-a5f6-d5851a9dfe01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e5ef803501c691429ac01567f13e31ac7540aac65182735056265b66b5ad6f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-b8l7s" podUID="be86530a-9db5-4cbc-a5f6-d5851a9dfe01" Nov 23 23:18:35.929714 containerd[1506]: time="2025-11-23T23:18:35.929238201Z" level=error msg="Failed to destroy network for sandbox \"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.931152 containerd[1506]: time="2025-11-23T23:18:35.931089638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-4pslb,Uid:a42a1cdf-25c9-4cc0-97f9-6bc1707a2182,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.931537 kubelet[2667]: E1123 23:18:35.931340 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.931537 kubelet[2667]: E1123 23:18:35.931397 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" Nov 23 23:18:35.931537 kubelet[2667]: E1123 23:18:35.931417 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" Nov 23 23:18:35.931636 kubelet[2667]: E1123 23:18:35.931469 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb69bc5c-4pslb_calico-apiserver(a42a1cdf-25c9-4cc0-97f9-6bc1707a2182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb69bc5c-4pslb_calico-apiserver(a42a1cdf-25c9-4cc0-97f9-6bc1707a2182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a82105d7b70e3b8eeb687245013da8f5a2beea5637cd55e58fe756a5daf330a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" podUID="a42a1cdf-25c9-4cc0-97f9-6bc1707a2182" Nov 23 23:18:35.940823 containerd[1506]: time="2025-11-23T23:18:35.940772397Z" level=error msg="Failed to destroy network for sandbox \"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.942884 containerd[1506]: time="2025-11-23T23:18:35.942805617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664f9fcd4f-9fzcc,Uid:99271428-091e-451d-a8a9-b200a082e2d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.943201 containerd[1506]: time="2025-11-23T23:18:35.943174264Z" level=error msg="Failed to destroy network for sandbox \"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.943435 kubelet[2667]: E1123 23:18:35.943399 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.943497 kubelet[2667]: E1123 23:18:35.943470 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" Nov 23 23:18:35.943534 kubelet[2667]: E1123 23:18:35.943502 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" Nov 23 23:18:35.943619 kubelet[2667]: E1123 23:18:35.943579 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-664f9fcd4f-9fzcc_calico-system(99271428-091e-451d-a8a9-b200a082e2d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-664f9fcd4f-9fzcc_calico-system(99271428-091e-451d-a8a9-b200a082e2d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"176d705671b72c3eb5998a7a78aebf65ac42b49cd0cd324f1ff7734b84e177be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3" Nov 23 23:18:35.944510 containerd[1506]: time="2025-11-23T23:18:35.944465349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-r2rtw,Uid:0cf0318d-6de7-4a09-88fe-94af28c80c28,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.944747 kubelet[2667]: E1123 23:18:35.944692 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.944784 kubelet[2667]: E1123 23:18:35.944759 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" Nov 23 23:18:35.944818 kubelet[2667]: E1123 23:18:35.944802 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" Nov 23 23:18:35.947354 kubelet[2667]: E1123 23:18:35.944852 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb69bc5c-r2rtw_calico-apiserver(0cf0318d-6de7-4a09-88fe-94af28c80c28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb69bc5c-r2rtw_calico-apiserver(0cf0318d-6de7-4a09-88fe-94af28c80c28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33cc506bd115f4eca768faf6582cadb029bda1cc3d02ea78764e3da3b6196b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28" Nov 23 23:18:35.948455 containerd[1506]: time="2025-11-23T23:18:35.948342885Z" level=error msg="Failed to destroy network for sandbox \"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.949440 containerd[1506]: time="2025-11-23T23:18:35.949408381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjz79,Uid:6e697a16-c70a-4db1-9f7e-9b02bca06e22,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.949921 kubelet[2667]: E1123 23:18:35.949891 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:35.949985 kubelet[2667]: E1123 23:18:35.949933 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.949985 kubelet[2667]: E1123 23:18:35.949949 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zjz79" Nov 23 23:18:35.950656 kubelet[2667]: E1123 23:18:35.949994 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zjz79_calico-system(6e697a16-c70a-4db1-9f7e-9b02bca06e22)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zjz79_calico-system(6e697a16-c70a-4db1-9f7e-9b02bca06e22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bef3d2e8562c43260c73d929d9f80b328beea960e8b49301677baf7af94fb21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zjz79" podUID="6e697a16-c70a-4db1-9f7e-9b02bca06e22" Nov 23 23:18:36.480817 systemd[1]: Created slice kubepods-besteffort-poda03beb28_b117_427c_9d7b_d6307465538e.slice - libcontainer container kubepods-besteffort-poda03beb28_b117_427c_9d7b_d6307465538e.slice. Nov 23 23:18:36.488492 containerd[1506]: time="2025-11-23T23:18:36.488436768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlht4,Uid:a03beb28-b117-427c-9d7b-d6307465538e,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:36.556320 containerd[1506]: time="2025-11-23T23:18:36.556271024Z" level=error msg="Failed to destroy network for sandbox \"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:36.559241 containerd[1506]: time="2025-11-23T23:18:36.557637428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlht4,Uid:a03beb28-b117-427c-9d7b-d6307465538e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:36.559385 kubelet[2667]: E1123 23:18:36.557881 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:18:36.559385 kubelet[2667]: E1123 23:18:36.557934 2667 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:36.559385 kubelet[2667]: E1123 23:18:36.557953 2667 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mlht4" Nov 23 23:18:36.559463 kubelet[2667]: E1123 23:18:36.557998 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fd46b80d9d09546618bf2dd6ee28a9401f3e3384b586907179063f571b1fece\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:36.675368 systemd[1]: run-netns-cni\x2dac724f35\x2db4ff\x2d454d\x2d4e1f\x2d48fe15fae056.mount: Deactivated successfully. Nov 23 23:18:36.675461 systemd[1]: run-netns-cni\x2d18018f98\x2d35ce\x2dc91f\x2d9bca\x2d739f7af75f18.mount: Deactivated successfully. Nov 23 23:18:39.464843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969801588.mount: Deactivated successfully. Nov 23 23:18:39.841428 containerd[1506]: time="2025-11-23T23:18:39.841032740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:39.842133 containerd[1506]: time="2025-11-23T23:18:39.842098254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:18:39.843057 containerd[1506]: time="2025-11-23T23:18:39.843024354Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:39.845211 containerd[1506]: time="2025-11-23T23:18:39.845173985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:18:39.845773 containerd[1506]: time="2025-11-23T23:18:39.845743686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.273525522s" Nov 23 23:18:39.845809 containerd[1506]: time="2025-11-23T23:18:39.845775490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:18:39.869710 containerd[1506]: time="2025-11-23T23:18:39.869665899Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:18:39.896016 containerd[1506]: time="2025-11-23T23:18:39.895958727Z" level=info msg="Container 7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:39.916399 containerd[1506]: time="2025-11-23T23:18:39.916336319Z" level=info msg="CreateContainer within sandbox \"c52e45d8039335c2faf4f2a16dcdc095cefce72a9832f87a77e8ebd981930209\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93\"" Nov 23 23:18:39.917072 containerd[1506]: 
time="2025-11-23T23:18:39.917043195Z" level=info msg="StartContainer for \"7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93\"" Nov 23 23:18:39.918707 containerd[1506]: time="2025-11-23T23:18:39.918534995Z" level=info msg="connecting to shim 7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93" address="unix:///run/containerd/s/f7984adcf69f6fb2503e3425b305182df30fb189cf36a638a7a4fb9a0641616b" protocol=ttrpc version=3 Nov 23 23:18:39.965232 systemd[1]: Started cri-containerd-7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93.scope - libcontainer container 7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93. Nov 23 23:18:40.188776 containerd[1506]: time="2025-11-23T23:18:40.188424895Z" level=info msg="StartContainer for \"7343cd6fcba9bb973eebf6cd194dc53eabc2d0cd3b0e84cdfaa4dadf70a2ce93\" returns successfully" Nov 23 23:18:40.205637 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:18:40.205750 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 23 23:18:40.389514 kubelet[2667]: I1123 23:18:40.389473 2667 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-ca-bundle\") pod \"2fa08b76-cbac-4d84-b388-cf070905dbf5\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " Nov 23 23:18:40.389514 kubelet[2667]: I1123 23:18:40.389523 2667 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-backend-key-pair\") pod \"2fa08b76-cbac-4d84-b388-cf070905dbf5\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " Nov 23 23:18:40.389909 kubelet[2667]: I1123 23:18:40.389570 2667 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p7qc\" (UniqueName: \"kubernetes.io/projected/2fa08b76-cbac-4d84-b388-cf070905dbf5-kube-api-access-7p7qc\") pod \"2fa08b76-cbac-4d84-b388-cf070905dbf5\" (UID: \"2fa08b76-cbac-4d84-b388-cf070905dbf5\") " Nov 23 23:18:40.398514 kubelet[2667]: I1123 23:18:40.398467 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2fa08b76-cbac-4d84-b388-cf070905dbf5" (UID: "2fa08b76-cbac-4d84-b388-cf070905dbf5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:18:40.403372 kubelet[2667]: I1123 23:18:40.403306 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2fa08b76-cbac-4d84-b388-cf070905dbf5" (UID: "2fa08b76-cbac-4d84-b388-cf070905dbf5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:18:40.403372 kubelet[2667]: I1123 23:18:40.403319 2667 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa08b76-cbac-4d84-b388-cf070905dbf5-kube-api-access-7p7qc" (OuterVolumeSpecName: "kube-api-access-7p7qc") pod "2fa08b76-cbac-4d84-b388-cf070905dbf5" (UID: "2fa08b76-cbac-4d84-b388-cf070905dbf5"). InnerVolumeSpecName "kube-api-access-7p7qc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:18:40.466321 systemd[1]: var-lib-kubelet-pods-2fa08b76\x2dcbac\x2d4d84\x2db388\x2dcf070905dbf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7p7qc.mount: Deactivated successfully. Nov 23 23:18:40.466420 systemd[1]: var-lib-kubelet-pods-2fa08b76\x2dcbac\x2d4d84\x2db388\x2dcf070905dbf5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:18:40.490403 kubelet[2667]: I1123 23:18:40.490333 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7p7qc\" (UniqueName: \"kubernetes.io/projected/2fa08b76-cbac-4d84-b388-cf070905dbf5-kube-api-access-7p7qc\") on node \"localhost\" DevicePath \"\"" Nov 23 23:18:40.490403 kubelet[2667]: I1123 23:18:40.490369 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 23 23:18:40.490403 kubelet[2667]: I1123 23:18:40.490378 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2fa08b76-cbac-4d84-b388-cf070905dbf5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 23 23:18:40.869311 systemd[1]: Removed slice kubepods-besteffort-pod2fa08b76_cbac_4d84_b388_cf070905dbf5.slice - libcontainer container kubepods-besteffort-pod2fa08b76_cbac_4d84_b388_cf070905dbf5.slice. Nov 23 23:18:40.906273 kubelet[2667]: I1123 23:18:40.905949 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-chz59" podStartSLOduration=2.453316538 podStartE2EDuration="13.895217205s" podCreationTimestamp="2025-11-23 23:18:27 +0000 UTC" firstStartedPulling="2025-11-23 23:18:28.404713313 +0000 UTC m=+21.031935096" lastFinishedPulling="2025-11-23 23:18:39.84661398 +0000 UTC m=+32.473835763" observedRunningTime="2025-11-23 23:18:40.88483111 +0000 UTC m=+33.512052893" watchObservedRunningTime="2025-11-23 23:18:40.895217205 +0000 UTC m=+33.522438988" Nov 23 23:18:40.950812 systemd[1]: Created slice kubepods-besteffort-pod1911fb13_5d38_47f5_98a7_ab4a295196c4.slice - libcontainer container kubepods-besteffort-pod1911fb13_5d38_47f5_98a7_ab4a295196c4.slice. 
Nov 23 23:18:40.994238 kubelet[2667]: I1123 23:18:40.994198 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1911fb13-5d38-47f5-98a7-ab4a295196c4-whisker-ca-bundle\") pod \"whisker-5dfffb7d85-g57l4\" (UID: \"1911fb13-5d38-47f5-98a7-ab4a295196c4\") " pod="calico-system/whisker-5dfffb7d85-g57l4" Nov 23 23:18:40.994238 kubelet[2667]: I1123 23:18:40.994242 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1911fb13-5d38-47f5-98a7-ab4a295196c4-whisker-backend-key-pair\") pod \"whisker-5dfffb7d85-g57l4\" (UID: \"1911fb13-5d38-47f5-98a7-ab4a295196c4\") " pod="calico-system/whisker-5dfffb7d85-g57l4" Nov 23 23:18:40.994405 kubelet[2667]: I1123 23:18:40.994262 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287bn\" (UniqueName: \"kubernetes.io/projected/1911fb13-5d38-47f5-98a7-ab4a295196c4-kube-api-access-287bn\") pod \"whisker-5dfffb7d85-g57l4\" (UID: \"1911fb13-5d38-47f5-98a7-ab4a295196c4\") " pod="calico-system/whisker-5dfffb7d85-g57l4" Nov 23 23:18:41.262186 containerd[1506]: time="2025-11-23T23:18:41.262096029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfffb7d85-g57l4,Uid:1911fb13-5d38-47f5-98a7-ab4a295196c4,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:41.423946 systemd-networkd[1437]: calicd15cd2860d: Link UP Nov 23 23:18:41.424701 systemd-networkd[1437]: calicd15cd2860d: Gained carrier Nov 23 23:18:41.438865 containerd[1506]: 2025-11-23 23:18:41.286 [INFO][3826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:18:41.438865 containerd[1506]: 2025-11-23 23:18:41.326 [INFO][3826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5dfffb7d85--g57l4-eth0 whisker-5dfffb7d85- calico-system 1911fb13-5d38-47f5-98a7-ab4a295196c4 857 0 2025-11-23 23:18:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dfffb7d85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5dfffb7d85-g57l4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicd15cd2860d [] [] }} ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-" Nov 23 23:18:41.438865 containerd[1506]: 2025-11-23 23:18:41.326 [INFO][3826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.438865 containerd[1506]: 2025-11-23 23:18:41.379 [INFO][3839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" HandleID="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Workload="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.379 [INFO][3839] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" HandleID="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Workload="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004941a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5dfffb7d85-g57l4", "timestamp":"2025-11-23 23:18:41.379589514 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.379 [INFO][3839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.379 [INFO][3839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.379 [INFO][3839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.389 [INFO][3839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" host="localhost" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.395 [INFO][3839] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.399 [INFO][3839] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.401 [INFO][3839] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.403 [INFO][3839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:41.439104 containerd[1506]: 2025-11-23 23:18:41.403 [INFO][3839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" host="localhost" Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.405 [INFO][3839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42 Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.409 [INFO][3839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" host="localhost" Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.414 [INFO][3839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" host="localhost" Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.414 [INFO][3839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" host="localhost" Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.414 [INFO][3839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:41.439294 containerd[1506]: 2025-11-23 23:18:41.414 [INFO][3839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" HandleID="k8s-pod-network.938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Workload="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.439411 containerd[1506]: 2025-11-23 23:18:41.416 [INFO][3826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dfffb7d85--g57l4-eth0", GenerateName:"whisker-5dfffb7d85-", Namespace:"calico-system", SelfLink:"", UID:"1911fb13-5d38-47f5-98a7-ab4a295196c4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dfffb7d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5dfffb7d85-g57l4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd15cd2860d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:41.439411 containerd[1506]: 2025-11-23 23:18:41.417 [INFO][3826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.439475 containerd[1506]: 2025-11-23 23:18:41.417 [INFO][3826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd15cd2860d ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.439475 containerd[1506]: 2025-11-23 23:18:41.424 [INFO][3826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.439513 containerd[1506]: 2025-11-23 23:18:41.426 [INFO][3826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5dfffb7d85--g57l4-eth0", GenerateName:"whisker-5dfffb7d85-", Namespace:"calico-system", SelfLink:"", UID:"1911fb13-5d38-47f5-98a7-ab4a295196c4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dfffb7d85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42", Pod:"whisker-5dfffb7d85-g57l4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd15cd2860d", MAC:"de:50:39:eb:2b:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:41.439559 containerd[1506]: 2025-11-23 23:18:41.436 [INFO][3826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" Namespace="calico-system" Pod="whisker-5dfffb7d85-g57l4" WorkloadEndpoint="localhost-k8s-whisker--5dfffb7d85--g57l4-eth0" Nov 23 23:18:41.472950 kubelet[2667]: I1123 23:18:41.472908 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa08b76-cbac-4d84-b388-cf070905dbf5" path="/var/lib/kubelet/pods/2fa08b76-cbac-4d84-b388-cf070905dbf5/volumes" Nov 23 23:18:41.499673 containerd[1506]: time="2025-11-23T23:18:41.499623953Z" level=info msg="connecting to shim 938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42" address="unix:///run/containerd/s/1ea31514d206e4eb01b2feb995b23e906a80969a24811e8a115447d806147534" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:41.526212 systemd[1]: Started cri-containerd-938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42.scope - libcontainer container 938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42. 
Nov 23 23:18:41.561174 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:41.641234 containerd[1506]: time="2025-11-23T23:18:41.640587829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dfffb7d85-g57l4,Uid:1911fb13-5d38-47f5-98a7-ab4a295196c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"938e46c2febe68a1a27ae42af90d6692288e755464a791c6e399221b6b50be42\"" Nov 23 23:18:41.644523 containerd[1506]: time="2025-11-23T23:18:41.643436207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:18:41.862929 containerd[1506]: time="2025-11-23T23:18:41.862671135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:41.902632 containerd[1506]: time="2025-11-23T23:18:41.902555458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:18:41.902936 containerd[1506]: time="2025-11-23T23:18:41.902589780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:18:41.903515 kubelet[2667]: E1123 23:18:41.903337 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:18:41.904149 kubelet[2667]: E1123 23:18:41.903947 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:18:41.906866 kubelet[2667]: E1123 23:18:41.906819 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dfffb7d85-g57l4_calico-system(1911fb13-5d38-47f5-98a7-ab4a295196c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:41.907705 containerd[1506]: time="2025-11-23T23:18:41.907671448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:18:42.124473 containerd[1506]: time="2025-11-23T23:18:42.124432647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:42.130411 containerd[1506]: time="2025-11-23T23:18:42.130355126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:18:42.130490 containerd[1506]: time="2025-11-23T23:18:42.130446773Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:18:42.130684 kubelet[2667]: E1123 23:18:42.130635 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:18:42.130748 kubelet[2667]: E1123 23:18:42.130696 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:18:42.130778 kubelet[2667]: E1123 23:18:42.130767 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dfffb7d85-g57l4_calico-system(1911fb13-5d38-47f5-98a7-ab4a295196c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:42.130839 kubelet[2667]: E1123 23:18:42.130806 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dfffb7d85-g57l4" podUID="1911fb13-5d38-47f5-98a7-ab4a295196c4" Nov 23 23:18:42.873869 kubelet[2667]: E1123 23:18:42.873718 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dfffb7d85-g57l4" podUID="1911fb13-5d38-47f5-98a7-ab4a295196c4" Nov 23 23:18:43.163182 systemd-networkd[1437]: calicd15cd2860d: Gained IPv6LL Nov 23 23:18:46.284550 kubelet[2667]: I1123 23:18:46.284501 2667 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:18:46.474196 containerd[1506]: time="2025-11-23T23:18:46.474112076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7q7t,Uid:af0c333b-c43b-42f7-a468-945f73c1365e,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:46.475137 containerd[1506]: time="2025-11-23T23:18:46.475106022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664f9fcd4f-9fzcc,Uid:99271428-091e-451d-a8a9-b200a082e2d3,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:46.590065 systemd-networkd[1437]: cali9968c59bac0: Link UP Nov 23 23:18:46.592553 systemd-networkd[1437]: cali9968c59bac0: Gained carrier Nov 23 23:18:46.613142 containerd[1506]: 2025-11-23 23:18:46.503 [INFO][4156] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:18:46.613142 containerd[1506]: 2025-11-23 23:18:46.520 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--d7q7t-eth0 coredns-66bc5c9577- kube-system af0c333b-c43b-42f7-a468-945f73c1365e 788 0 2025-11-23 23:18:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-d7q7t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9968c59bac0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-" Nov 23 23:18:46.613142 containerd[1506]: 2025-11-23 23:18:46.520 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613142 containerd[1506]: 2025-11-23 23:18:46.547 [INFO][4186] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" HandleID="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Workload="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.548 [INFO][4186] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" HandleID="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Workload="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-d7q7t", "timestamp":"2025-11-23 23:18:46.54783949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.548 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.548 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.548 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.558 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" host="localhost" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.563 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.567 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.570 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.572 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:46.613367 containerd[1506]: 2025-11-23 23:18:46.572 [INFO][4186] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" host="localhost" Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.574 [INFO][4186] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45 Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.579 [INFO][4186] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" host="localhost" Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4186] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" host="localhost" Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" host="localhost" Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
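Unlike the calico/node pull that succeeded earlier, both whisker images (ghcr.io/flatcar/calico/whisker:v3.30.4 and ghcr.io/flatcar/calico/whisker-backend:v3.30.4) came back 404 from ghcr.io, so the kubelet first reports ErrImagePull and then parks the containers in ImagePullBackOff, as the records a little further up show. The retry cadence in the sketch below is an assumption used only for illustration (roughly a 10-second initial delay doubling to a five-minute cap); the log itself states nothing about the exact timings.

// Hedged sketch of the exponential backoff behind the ImagePullBackOff records
// above. The 10s initial delay and 5m cap are assumptions for illustration,
// not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d fails (404 from ghcr.io), next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}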
Nov 23 23:18:46.613585 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4186] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" HandleID="k8s-pod-network.5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Workload="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 23:18:46.586 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--d7q7t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"af0c333b-c43b-42f7-a468-945f73c1365e", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-d7q7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9968c59bac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 23:18:46.586 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 23:18:46.586 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9968c59bac0 ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 
23:18:46.588 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 23:18:46.588 [INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--d7q7t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"af0c333b-c43b-42f7-a468-945f73c1365e", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45", Pod:"coredns-66bc5c9577-d7q7t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9968c59bac0", MAC:"3a:97:fd:bd:f6:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:46.613695 containerd[1506]: 2025-11-23 23:18:46.604 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" Namespace="kube-system" Pod="coredns-66bc5c9577-d7q7t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--d7q7t-eth0" Nov 23 23:18:46.640770 containerd[1506]: time="2025-11-23T23:18:46.640500081Z" level=info msg="connecting to shim 5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45" address="unix:///run/containerd/s/be677fb80abee389b3079dd580749affac2d4014638d7fa28d9c3cbdfd9206c2" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:46.664232 systemd[1]: Started 
cri-containerd-5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45.scope - libcontainer container 5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45. Nov 23 23:18:46.679662 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:46.714627 containerd[1506]: time="2025-11-23T23:18:46.714571918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d7q7t,Uid:af0c333b-c43b-42f7-a468-945f73c1365e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45\"" Nov 23 23:18:46.729670 systemd-networkd[1437]: cali9c227906377: Link UP Nov 23 23:18:46.730143 systemd-networkd[1437]: cali9c227906377: Gained carrier Nov 23 23:18:46.783527 containerd[1506]: time="2025-11-23T23:18:46.783488013Z" level=info msg="CreateContainer within sandbox \"5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.508 [INFO][4168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.528 [INFO][4168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0 calico-kube-controllers-664f9fcd4f- calico-system 99271428-091e-451d-a8a9-b200a082e2d3 797 0 2025-11-23 23:18:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:664f9fcd4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-664f9fcd4f-9fzcc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9c227906377 [] [] }} ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.528 [INFO][4168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.558 [INFO][4193] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" HandleID="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Workload="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.558 [INFO][4193] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" HandleID="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Workload="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-664f9fcd4f-9fzcc", "timestamp":"2025-11-23 23:18:46.558288504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.558 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.584 [INFO][4193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.659 [INFO][4193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.665 [INFO][4193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.672 [INFO][4193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.674 [INFO][4193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.678 [INFO][4193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.678 [INFO][4193] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.680 [INFO][4193] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406 Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.692 [INFO][4193] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.725 [INFO][4193] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.725 [INFO][4193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" host="localhost" Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.725 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:46.791022 containerd[1506]: 2025-11-23 23:18:46.725 [INFO][4193] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" HandleID="k8s-pod-network.a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Workload="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.727 [INFO][4168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0", GenerateName:"calico-kube-controllers-664f9fcd4f-", Namespace:"calico-system", SelfLink:"", UID:"99271428-091e-451d-a8a9-b200a082e2d3", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664f9fcd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-664f9fcd4f-9fzcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c227906377", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.727 [INFO][4168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.728 [INFO][4168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c227906377 ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.729 [INFO][4168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.729 [INFO][4168] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0", GenerateName:"calico-kube-controllers-664f9fcd4f-", Namespace:"calico-system", SelfLink:"", UID:"99271428-091e-451d-a8a9-b200a082e2d3", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664f9fcd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406", Pod:"calico-kube-controllers-664f9fcd4f-9fzcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9c227906377", MAC:"e6:ae:a1:18:0e:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:46.791582 containerd[1506]: 2025-11-23 23:18:46.787 [INFO][4168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" Namespace="calico-system" Pod="calico-kube-controllers-664f9fcd4f-9fzcc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--664f9fcd4f--9fzcc-eth0" Nov 23 23:18:47.107742 containerd[1506]: time="2025-11-23T23:18:47.107680222Z" level=info msg="Container e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:47.171224 containerd[1506]: time="2025-11-23T23:18:47.171162402Z" level=info msg="CreateContainer within sandbox \"5f444b2c29781776f6e2f7f3eb4ede832a47779354bf1840d84edd356fceab45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89\"" Nov 23 23:18:47.172916 containerd[1506]: time="2025-11-23T23:18:47.172871312Z" level=info msg="StartContainer for \"e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89\"" Nov 23 23:18:47.176814 containerd[1506]: time="2025-11-23T23:18:47.176475385Z" level=info msg="connecting to shim e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89" address="unix:///run/containerd/s/be677fb80abee389b3079dd580749affac2d4014638d7fa28d9c3cbdfd9206c2" protocol=ttrpc version=3 Nov 23 23:18:47.198212 systemd[1]: Started cri-containerd-e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89.scope - libcontainer container e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89. 
Nov 23 23:18:47.211552 containerd[1506]: time="2025-11-23T23:18:47.211426682Z" level=info msg="connecting to shim a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406" address="unix:///run/containerd/s/4b011f093e60e812031e169f23472b6949f0cb37d274325f60f9a3375d899990" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:47.241758 systemd[1]: Started cri-containerd-a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406.scope - libcontainer container a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406. Nov 23 23:18:47.258977 systemd-networkd[1437]: vxlan.calico: Link UP Nov 23 23:18:47.258983 systemd-networkd[1437]: vxlan.calico: Gained carrier Nov 23 23:18:47.288892 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:47.308466 containerd[1506]: time="2025-11-23T23:18:47.308344781Z" level=info msg="StartContainer for \"e6a96915e8eda380930af0f6e57971759bbbf56f2040792b082e3d8b6e035f89\" returns successfully" Nov 23 23:18:47.326145 containerd[1506]: time="2025-11-23T23:18:47.326103208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664f9fcd4f-9fzcc,Uid:99271428-091e-451d-a8a9-b200a082e2d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4ed5e05008b5d7f8536466c75d880202c1bcda4e08915bc18ad3067c26e5406\"" Nov 23 23:18:47.330515 containerd[1506]: time="2025-11-23T23:18:47.330478170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:18:47.482152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178010775.mount: Deactivated successfully. Nov 23 23:18:47.544077 containerd[1506]: time="2025-11-23T23:18:47.544033442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:47.547085 containerd[1506]: time="2025-11-23T23:18:47.546975712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:18:47.547533 containerd[1506]: time="2025-11-23T23:18:47.547493945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:18:47.547846 kubelet[2667]: E1123 23:18:47.547787 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:18:47.548233 kubelet[2667]: E1123 23:18:47.547859 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:18:47.548233 kubelet[2667]: E1123 23:18:47.547932 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-664f9fcd4f-9fzcc_calico-system(99271428-091e-451d-a8a9-b200a082e2d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:47.548233 kubelet[2667]: E1123 23:18:47.547965 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3" Nov 23 23:18:47.669378 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:55258.service - OpenSSH per-connection server daemon (10.0.0.1:55258). Nov 23 23:18:47.735951 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:18:47.737484 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:18:47.741683 systemd-logind[1486]: New session 8 of user core. Nov 23 23:18:47.748190 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:18:47.771243 systemd-networkd[1437]: cali9c227906377: Gained IPv6LL Nov 23 23:18:47.771501 systemd-networkd[1437]: cali9968c59bac0: Gained IPv6LL Nov 23 23:18:47.908771 kubelet[2667]: E1123 23:18:47.908720 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3" Nov 23 23:18:47.937930 kubelet[2667]: I1123 23:18:47.937870 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d7q7t" podStartSLOduration=34.937762389 podStartE2EDuration="34.937762389s" podCreationTimestamp="2025-11-23 23:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:47.937685304 +0000 UTC m=+40.564907087" watchObservedRunningTime="2025-11-23 23:18:47.937762389 +0000 UTC m=+40.564984172" Nov 23 23:18:47.955104 sshd[4488]: Connection closed by 10.0.0.1 port 55258 Nov 23 23:18:47.955428 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:47.960149 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:55258.service: Deactivated successfully. Nov 23 23:18:47.962445 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:18:47.963624 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:18:47.965188 systemd-logind[1486]: Removed session 8. 
Nov 23 23:18:48.473837 containerd[1506]: time="2025-11-23T23:18:48.473717257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-4pslb,Uid:a42a1cdf-25c9-4cc0-97f9-6bc1707a2182,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:18:48.475535 containerd[1506]: time="2025-11-23T23:18:48.475492528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjz79,Uid:6e697a16-c70a-4db1-9f7e-9b02bca06e22,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:48.604859 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Nov 23 23:18:48.606495 systemd-networkd[1437]: cali8770b9b30e1: Link UP Nov 23 23:18:48.607365 systemd-networkd[1437]: cali8770b9b30e1: Gained carrier Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.524 [INFO][4517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--zjz79-eth0 goldmane-7c778bb748- calico-system 6e697a16-c70a-4db1-9f7e-9b02bca06e22 796 0 2025-11-23 23:18:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-zjz79 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8770b9b30e1 [] [] }} ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.524 [INFO][4517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.554 [INFO][4540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" HandleID="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Workload="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.554 [INFO][4540] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" HandleID="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Workload="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-zjz79", "timestamp":"2025-11-23 23:18:48.554161271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.554 [INFO][4540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.554 [INFO][4540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.554 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.566 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.572 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.577 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.579 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.582 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.582 [INFO][4540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.584 [INFO][4540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.589 [INFO][4540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" host="localhost" Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:48.623236 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" HandleID="k8s-pod-network.7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Workload="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.601 [INFO][4517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--zjz79-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6e697a16-c70a-4db1-9f7e-9b02bca06e22", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-zjz79", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8770b9b30e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.601 [INFO][4517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.601 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8770b9b30e1 ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.607 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.608 [INFO][4517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--zjz79-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6e697a16-c70a-4db1-9f7e-9b02bca06e22", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad", Pod:"goldmane-7c778bb748-zjz79", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8770b9b30e1", MAC:"da:85:39:c5:ae:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:48.623947 containerd[1506]: 2025-11-23 23:18:48.619 [INFO][4517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" Namespace="calico-system" Pod="goldmane-7c778bb748-zjz79" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--zjz79-eth0" Nov 23 23:18:48.647327 containerd[1506]: time="2025-11-23T23:18:48.647284643Z" level=info msg="connecting to shim 7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad" address="unix:///run/containerd/s/310f42a6e77cfae44d07f4f4a1b6a8fab02f736b01909d505e1a96140dff48f7" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:48.675469 systemd[1]: Started cri-containerd-7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad.scope - libcontainer container 7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad. 
Nov 23 23:18:48.690083 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:48.715453 systemd-networkd[1437]: cali754bc594873: Link UP Nov 23 23:18:48.715583 systemd-networkd[1437]: cali754bc594873: Gained carrier Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.516 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0 calico-apiserver-77bb69bc5c- calico-apiserver a42a1cdf-25c9-4cc0-97f9-6bc1707a2182 795 0 2025-11-23 23:18:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bb69bc5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77bb69bc5c-4pslb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali754bc594873 [] [] }} ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.516 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.555 [INFO][4534] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" HandleID="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.556 [INFO][4534] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" HandleID="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77bb69bc5c-4pslb", "timestamp":"2025-11-23 23:18:48.555170575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.556 [INFO][4534] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4534] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.599 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.666 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.675 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.682 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.684 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.688 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.688 [INFO][4534] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.694 [INFO][4534] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0 Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.699 [INFO][4534] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.709 [INFO][4534] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.709 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" host="localhost" Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.709 [INFO][4534] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:48.732204 containerd[1506]: 2025-11-23 23:18:48.709 [INFO][4534] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" HandleID="k8s-pod-network.9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.712 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0", GenerateName:"calico-apiserver-77bb69bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a42a1cdf-25c9-4cc0-97f9-6bc1707a2182", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb69bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77bb69bc5c-4pslb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali754bc594873", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.713 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.713 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali754bc594873 ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.715 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.716 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0", GenerateName:"calico-apiserver-77bb69bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a42a1cdf-25c9-4cc0-97f9-6bc1707a2182", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb69bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0", Pod:"calico-apiserver-77bb69bc5c-4pslb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali754bc594873", MAC:"7e:6e:77:43:ed:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:48.734318 containerd[1506]: 2025-11-23 23:18:48.729 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-4pslb" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--4pslb-eth0" Nov 23 23:18:48.745514 containerd[1506]: time="2025-11-23T23:18:48.745465492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zjz79,Uid:6e697a16-c70a-4db1-9f7e-9b02bca06e22,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f2ee4d39e5e37c621fc87be97507bd966d10a2060fc21af77d7966031f84cad\"" Nov 23 23:18:48.748030 containerd[1506]: time="2025-11-23T23:18:48.747979370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:18:48.760555 containerd[1506]: time="2025-11-23T23:18:48.760167216Z" level=info msg="connecting to shim 9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0" address="unix:///run/containerd/s/963cbabef73f1c6f2dc195f15d0e8f7e008f085f3ee59ba6008575c4320bcb7c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:48.794214 systemd[1]: Started cri-containerd-9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0.scope - libcontainer container 9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0. 
Nov 23 23:18:48.805670 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:48.825971 containerd[1506]: time="2025-11-23T23:18:48.825915987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-4pslb,Uid:a42a1cdf-25c9-4cc0-97f9-6bc1707a2182,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9114df0750456266855fd4abcd6293ab8a1eea217c3cf7893350ff195affa1c0\"" Nov 23 23:18:48.919937 kubelet[2667]: E1123 23:18:48.919832 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3" Nov 23 23:18:48.932612 containerd[1506]: time="2025-11-23T23:18:48.931474620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:48.934217 containerd[1506]: time="2025-11-23T23:18:48.934039621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:18:48.934217 containerd[1506]: time="2025-11-23T23:18:48.934098465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:18:48.934404 kubelet[2667]: E1123 23:18:48.934271 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:18:48.934404 kubelet[2667]: E1123 23:18:48.934302 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:18:48.934538 kubelet[2667]: E1123 23:18:48.934459 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zjz79_calico-system(6e697a16-c70a-4db1-9f7e-9b02bca06e22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:48.934538 kubelet[2667]: E1123 23:18:48.934485 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjz79" podUID="6e697a16-c70a-4db1-9f7e-9b02bca06e22" Nov 23 23:18:48.936039 containerd[1506]: time="2025-11-23T23:18:48.935297660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:18:49.142637 containerd[1506]: time="2025-11-23T23:18:49.142588884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:49.149861 containerd[1506]: time="2025-11-23T23:18:49.149788685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:18:49.152179 containerd[1506]: time="2025-11-23T23:18:49.149759883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:18:49.152237 kubelet[2667]: E1123 23:18:49.150248 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:18:49.152237 kubelet[2667]: E1123 23:18:49.150291 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:18:49.152237 kubelet[2667]: E1123 23:18:49.150358 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-77bb69bc5c-4pslb_calico-apiserver(a42a1cdf-25c9-4cc0-97f9-6bc1707a2182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:49.152237 kubelet[2667]: E1123 23:18:49.150407 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" podUID="a42a1cdf-25c9-4cc0-97f9-6bc1707a2182" Nov 23 23:18:49.475423 containerd[1506]: time="2025-11-23T23:18:49.474877922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8l7s,Uid:be86530a-9db5-4cbc-a5f6-d5851a9dfe01,Namespace:kube-system,Attempt:0,}" Nov 23 23:18:49.611215 systemd-networkd[1437]: calida392443a75: Link UP Nov 23 23:18:49.611398 systemd-networkd[1437]: calida392443a75: Gained carrier Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.524 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--66bc5c9577--b8l7s-eth0 coredns-66bc5c9577- kube-system be86530a-9db5-4cbc-a5f6-d5851a9dfe01 793 0 2025-11-23 23:18:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-b8l7s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida392443a75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.525 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.558 [INFO][4679] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" HandleID="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Workload="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.558 [INFO][4679] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" HandleID="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Workload="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-b8l7s", "timestamp":"2025-11-23 23:18:49.558369427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.558 [INFO][4679] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.558 [INFO][4679] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.558 [INFO][4679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.570 [INFO][4679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.582 [INFO][4679] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.586 [INFO][4679] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.588 [INFO][4679] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.591 [INFO][4679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.591 [INFO][4679] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.592 [INFO][4679] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.596 [INFO][4679] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.603 [INFO][4679] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.603 [INFO][4679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" host="localhost" Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.603 [INFO][4679] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:49.634973 containerd[1506]: 2025-11-23 23:18:49.604 [INFO][4679] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" HandleID="k8s-pod-network.9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Workload="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 23:18:49.608 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--b8l7s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"be86530a-9db5-4cbc-a5f6-d5851a9dfe01", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-b8l7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida392443a75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 23:18:49.608 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 23:18:49.608 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida392443a75 ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 
23:18:49.611 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 23:18:49.611 [INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--b8l7s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"be86530a-9db5-4cbc-a5f6-d5851a9dfe01", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba", Pod:"coredns-66bc5c9577-b8l7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida392443a75", MAC:"7e:fa:02:86:cc:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:49.635714 containerd[1506]: 2025-11-23 23:18:49.629 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" Namespace="kube-system" Pod="coredns-66bc5c9577-b8l7s" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--b8l7s-eth0" Nov 23 23:18:49.653735 containerd[1506]: time="2025-11-23T23:18:49.653684455Z" level=info msg="connecting to shim 9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba" address="unix:///run/containerd/s/5202116b3ae54424d1789e2638f722801f00d785009f2b9ae17b20e9a69dd9e0" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:49.692256 systemd[1]: Started 
cri-containerd-9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba.scope - libcontainer container 9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba. Nov 23 23:18:49.703838 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:49.725276 containerd[1506]: time="2025-11-23T23:18:49.725170386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b8l7s,Uid:be86530a-9db5-4cbc-a5f6-d5851a9dfe01,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba\"" Nov 23 23:18:49.729556 containerd[1506]: time="2025-11-23T23:18:49.729470089Z" level=info msg="CreateContainer within sandbox \"9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:18:49.738028 containerd[1506]: time="2025-11-23T23:18:49.737584585Z" level=info msg="Container 7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:18:49.746462 containerd[1506]: time="2025-11-23T23:18:49.746420765Z" level=info msg="CreateContainer within sandbox \"9f2a8e8244202829f723020b6c19dfa295ed5dc3385b8d3bfbc69141239d6cba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e\"" Nov 23 23:18:49.746962 containerd[1506]: time="2025-11-23T23:18:49.746938877Z" level=info msg="StartContainer for \"7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e\"" Nov 23 23:18:49.747767 containerd[1506]: time="2025-11-23T23:18:49.747742566Z" level=info msg="connecting to shim 7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e" address="unix:///run/containerd/s/5202116b3ae54424d1789e2638f722801f00d785009f2b9ae17b20e9a69dd9e0" protocol=ttrpc version=3 Nov 23 23:18:49.772226 systemd[1]: Started cri-containerd-7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e.scope - libcontainer container 7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e. 
Nov 23 23:18:49.800295 containerd[1506]: time="2025-11-23T23:18:49.799925037Z" level=info msg="StartContainer for \"7e5e3e0313bd4a24558a04a1d19f734be010c965be58f961ba427a8c2262564e\" returns successfully" Nov 23 23:18:49.924808 kubelet[2667]: E1123 23:18:49.924746 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjz79" podUID="6e697a16-c70a-4db1-9f7e-9b02bca06e22" Nov 23 23:18:49.925268 kubelet[2667]: E1123 23:18:49.925073 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" podUID="a42a1cdf-25c9-4cc0-97f9-6bc1707a2182" Nov 23 23:18:49.961947 kubelet[2667]: I1123 23:18:49.961869 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b8l7s" podStartSLOduration=36.961852178 podStartE2EDuration="36.961852178s" podCreationTimestamp="2025-11-23 23:18:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:18:49.955246574 +0000 UTC m=+42.582468357" watchObservedRunningTime="2025-11-23 23:18:49.961852178 +0000 UTC m=+42.589073961" Nov 23 23:18:50.267488 systemd-networkd[1437]: cali8770b9b30e1: Gained IPv6LL Nov 23 23:18:50.459171 systemd-networkd[1437]: cali754bc594873: Gained IPv6LL Nov 23 23:18:50.473181 containerd[1506]: time="2025-11-23T23:18:50.473128146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-r2rtw,Uid:0cf0318d-6de7-4a09-88fe-94af28c80c28,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:18:50.474708 containerd[1506]: time="2025-11-23T23:18:50.474597033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlht4,Uid:a03beb28-b117-427c-9d7b-d6307465538e,Namespace:calico-system,Attempt:0,}" Nov 23 23:18:50.595785 systemd-networkd[1437]: cali1195be28d02: Link UP Nov 23 23:18:50.595959 systemd-networkd[1437]: cali1195be28d02: Gained carrier Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.519 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0 calico-apiserver-77bb69bc5c- calico-apiserver 0cf0318d-6de7-4a09-88fe-94af28c80c28 799 0 2025-11-23 23:18:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bb69bc5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77bb69bc5c-r2rtw eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali1195be28d02 [] [] }} ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.519 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.547 [INFO][4803] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" HandleID="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.547 [INFO][4803] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" HandleID="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77bb69bc5c-r2rtw", "timestamp":"2025-11-23 23:18:50.547536853 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.547 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.548 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.548 [INFO][4803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.558 [INFO][4803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.564 [INFO][4803] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.572 [INFO][4803] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.574 [INFO][4803] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.576 [INFO][4803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.576 [INFO][4803] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.578 [INFO][4803] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.582 [INFO][4803] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.587 [INFO][4803] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.587 [INFO][4803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" host="localhost" Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.587 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:50.610676 containerd[1506]: 2025-11-23 23:18:50.587 [INFO][4803] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" HandleID="k8s-pod-network.75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Workload="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.591 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0", GenerateName:"calico-apiserver-77bb69bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cf0318d-6de7-4a09-88fe-94af28c80c28", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb69bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77bb69bc5c-r2rtw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1195be28d02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.591 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.591 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1195be28d02 ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.595 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.595 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0", GenerateName:"calico-apiserver-77bb69bc5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cf0318d-6de7-4a09-88fe-94af28c80c28", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb69bc5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb", Pod:"calico-apiserver-77bb69bc5c-r2rtw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1195be28d02", MAC:"76:b0:80:39:6a:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:50.611295 containerd[1506]: 2025-11-23 23:18:50.606 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb69bc5c-r2rtw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77bb69bc5c--r2rtw-eth0" Nov 23 23:18:50.640068 containerd[1506]: time="2025-11-23T23:18:50.639566730Z" level=info msg="connecting to shim 75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb" address="unix:///run/containerd/s/eded9b2e14f5b063594d00649c8280a689d99bb9d77f47da58c9194001a0b0b6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:50.664342 systemd[1]: Started cri-containerd-75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb.scope - libcontainer container 75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb. 
Nov 23 23:18:50.679028 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:50.704160 systemd-networkd[1437]: calid6bc7b8dd2d: Link UP Nov 23 23:18:50.704572 systemd-networkd[1437]: calid6bc7b8dd2d: Gained carrier Nov 23 23:18:50.710724 containerd[1506]: time="2025-11-23T23:18:50.710666041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb69bc5c-r2rtw,Uid:0cf0318d-6de7-4a09-88fe-94af28c80c28,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"75776be37293203c473ddc76a12b2e7f7ebdd75d7248a4b98c20c86bd7cf63fb\"" Nov 23 23:18:50.712655 containerd[1506]: time="2025-11-23T23:18:50.712606796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.522 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mlht4-eth0 csi-node-driver- calico-system a03beb28-b117-427c-9d7b-d6307465538e 697 0 2025-11-23 23:18:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mlht4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid6bc7b8dd2d [] [] }} ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.522 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.555 [INFO][4809] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" HandleID="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Workload="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.555 [INFO][4809] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" HandleID="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Workload="localhost-k8s-csi--node--driver--mlht4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000580a60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mlht4", "timestamp":"2025-11-23 23:18:50.555322997 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.555 [INFO][4809] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.587 [INFO][4809] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.588 [INFO][4809] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.659 [INFO][4809] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.665 [INFO][4809] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.673 [INFO][4809] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.678 [INFO][4809] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.681 [INFO][4809] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.681 [INFO][4809] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.683 [INFO][4809] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7 Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.687 [INFO][4809] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.698 [INFO][4809] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.698 [INFO][4809] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" host="localhost" Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.698 [INFO][4809] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:18:50.721835 containerd[1506]: 2025-11-23 23:18:50.698 [INFO][4809] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" HandleID="k8s-pod-network.81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Workload="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.701 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mlht4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a03beb28-b117-427c-9d7b-d6307465538e", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mlht4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6bc7b8dd2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.702 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.702 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6bc7b8dd2d ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.705 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.705 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mlht4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a03beb28-b117-427c-9d7b-d6307465538e", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 18, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7", Pod:"csi-node-driver-mlht4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6bc7b8dd2d", MAC:"16:0f:48:64:3c:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:18:50.722530 containerd[1506]: 2025-11-23 23:18:50.718 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" Namespace="calico-system" Pod="csi-node-driver-mlht4" WorkloadEndpoint="localhost-k8s-csi--node--driver--mlht4-eth0" Nov 23 23:18:50.748581 containerd[1506]: time="2025-11-23T23:18:50.748539374Z" level=info msg="connecting to shim 81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7" address="unix:///run/containerd/s/db1bf41c1764c01835cfd233def33975690e816c9a4a0831dbaaaddab32e08c5" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:18:50.777294 systemd[1]: Started cri-containerd-81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7.scope - libcontainer container 81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7. 
Nov 23 23:18:50.790335 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:18:50.813667 containerd[1506]: time="2025-11-23T23:18:50.813627327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mlht4,Uid:a03beb28-b117-427c-9d7b-d6307465538e,Namespace:calico-system,Attempt:0,} returns sandbox id \"81e30cc486e4150cc29d70db8d1f388ab160a389b137a3eb46b8e1c59ddac9d7\"" Nov 23 23:18:50.843229 systemd-networkd[1437]: calida392443a75: Gained IPv6LL Nov 23 23:18:50.921553 containerd[1506]: time="2025-11-23T23:18:50.921490146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:50.922447 containerd[1506]: time="2025-11-23T23:18:50.922401600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:18:50.922569 containerd[1506]: time="2025-11-23T23:18:50.922505566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:18:50.922706 kubelet[2667]: E1123 23:18:50.922639 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:18:50.922706 kubelet[2667]: E1123 23:18:50.922703 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:18:50.922968 kubelet[2667]: E1123 23:18:50.922934 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-77bb69bc5c-r2rtw_calico-apiserver(0cf0318d-6de7-4a09-88fe-94af28c80c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:50.923145 kubelet[2667]: E1123 23:18:50.923101 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28" Nov 23 23:18:50.928823 kubelet[2667]: E1123 23:18:50.928761 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28" Nov 23 23:18:50.932119 containerd[1506]: time="2025-11-23T23:18:50.932059935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:18:51.151045 containerd[1506]: time="2025-11-23T23:18:51.150929961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:51.151951 containerd[1506]: time="2025-11-23T23:18:51.151916898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:18:51.152081 containerd[1506]: time="2025-11-23T23:18:51.151984982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:18:51.152178 kubelet[2667]: E1123 23:18:51.152141 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:18:51.152235 kubelet[2667]: E1123 23:18:51.152184 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:18:51.152315 kubelet[2667]: E1123 23:18:51.152265 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:51.153532 containerd[1506]: time="2025-11-23T23:18:51.153471268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:18:51.358710 containerd[1506]: time="2025-11-23T23:18:51.358575188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:51.359713 containerd[1506]: time="2025-11-23T23:18:51.359647490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:18:51.359772 containerd[1506]: time="2025-11-23T23:18:51.359734815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:18:51.359931 kubelet[2667]: E1123 23:18:51.359892 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:18:51.359982 kubelet[2667]: E1123 23:18:51.359939 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:18:51.360060 kubelet[2667]: E1123 23:18:51.360037 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:51.360137 kubelet[2667]: E1123 23:18:51.360085 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:51.932859 kubelet[2667]: E1123 23:18:51.932754 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28" Nov 23 23:18:51.932859 kubelet[2667]: E1123 23:18:51.932787 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e" Nov 23 23:18:51.995186 systemd-networkd[1437]: cali1195be28d02: Gained IPv6LL Nov 23 23:18:52.699183 systemd-networkd[1437]: calid6bc7b8dd2d: Gained IPv6LL Nov 23 23:18:52.974261 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:48746.service - OpenSSH per-connection server daemon (10.0.0.1:48746). Nov 23 23:18:53.048831 sshd[4941]: Accepted publickey for core from 10.0.0.1 port 48746 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:18:53.051235 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:18:53.057078 systemd-logind[1486]: New session 9 of user core. Nov 23 23:18:53.067194 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:18:53.248164 sshd[4944]: Connection closed by 10.0.0.1 port 48746 Nov 23 23:18:53.248695 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:53.254946 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:48746.service: Deactivated successfully. Nov 23 23:18:53.258139 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:18:53.260549 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:18:53.265663 systemd-logind[1486]: Removed session 9. Nov 23 23:18:57.474285 containerd[1506]: time="2025-11-23T23:18:57.473768779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:18:57.658018 containerd[1506]: time="2025-11-23T23:18:57.657777947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:57.659131 containerd[1506]: time="2025-11-23T23:18:57.659065210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:18:57.659832 containerd[1506]: time="2025-11-23T23:18:57.659175616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:18:57.659869 kubelet[2667]: E1123 23:18:57.659418 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:18:57.659869 kubelet[2667]: E1123 23:18:57.659493 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:18:57.659869 kubelet[2667]: E1123 23:18:57.659585 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dfffb7d85-g57l4_calico-system(1911fb13-5d38-47f5-98a7-ab4a295196c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:57.661120 containerd[1506]: time="2025-11-23T23:18:57.661043788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:18:57.874011 containerd[1506]: time="2025-11-23T23:18:57.873963823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:18:57.886169 containerd[1506]: time="2025-11-23T23:18:57.886101143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:18:57.886321 containerd[1506]: time="2025-11-23T23:18:57.886203308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:18:57.886849 kubelet[2667]: E1123 23:18:57.886408 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:18:57.886849 kubelet[2667]: E1123 23:18:57.886457 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:18:57.886849 kubelet[2667]: E1123 23:18:57.886534 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dfffb7d85-g57l4_calico-system(1911fb13-5d38-47f5-98a7-ab4a295196c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:18:57.886966 kubelet[2667]: E1123 23:18:57.886578 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dfffb7d85-g57l4" podUID="1911fb13-5d38-47f5-98a7-ab4a295196c4" Nov 23 23:18:58.263485 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:48762.service - OpenSSH per-connection server daemon (10.0.0.1:48762). 
Nov 23 23:18:58.327443 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 48762 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:18:58.328784 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:18:58.333094 systemd-logind[1486]: New session 10 of user core. Nov 23 23:18:58.342285 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:18:58.519115 sshd[4974]: Connection closed by 10.0.0.1 port 48762 Nov 23 23:18:58.519224 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:58.528669 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:48762.service: Deactivated successfully. Nov 23 23:18:58.530435 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:18:58.531301 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:18:58.534213 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:48774.service - OpenSSH per-connection server daemon (10.0.0.1:48774). Nov 23 23:18:58.534877 systemd-logind[1486]: Removed session 10. Nov 23 23:18:58.590951 sshd[4988]: Accepted publickey for core from 10.0.0.1 port 48774 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:18:58.592455 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:18:58.600368 systemd-logind[1486]: New session 11 of user core. Nov 23 23:18:58.613322 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:18:58.787110 sshd[4991]: Connection closed by 10.0.0.1 port 48774 Nov 23 23:18:58.787584 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:58.797932 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:48774.service: Deactivated successfully. Nov 23 23:18:58.800554 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:18:58.802049 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:18:58.806042 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:48782.service - OpenSSH per-connection server daemon (10.0.0.1:48782). Nov 23 23:18:58.807328 systemd-logind[1486]: Removed session 11. Nov 23 23:18:58.865016 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 48782 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:18:58.866402 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:18:58.870399 systemd-logind[1486]: New session 12 of user core. Nov 23 23:18:58.879264 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:18:59.064641 sshd[5005]: Connection closed by 10.0.0.1 port 48782 Nov 23 23:18:59.065193 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Nov 23 23:18:59.073334 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:48782.service: Deactivated successfully. Nov 23 23:18:59.078663 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:18:59.081425 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:18:59.083578 systemd-logind[1486]: Removed session 12. 
Nov 23 23:19:00.472214 containerd[1506]: time="2025-11-23T23:19:00.472155225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:19:00.698856 containerd[1506]: time="2025-11-23T23:19:00.698800581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:19:00.700635 containerd[1506]: time="2025-11-23T23:19:00.700522260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:19:00.700635 containerd[1506]: time="2025-11-23T23:19:00.700599663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:19:00.700821 kubelet[2667]: E1123 23:19:00.700768 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:19:00.700821 kubelet[2667]: E1123 23:19:00.700816 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:19:00.701176 kubelet[2667]: E1123 23:19:00.700888 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zjz79_calico-system(6e697a16-c70a-4db1-9f7e-9b02bca06e22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:19:00.701176 kubelet[2667]: E1123 23:19:00.700919 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjz79" podUID="6e697a16-c70a-4db1-9f7e-9b02bca06e22" Nov 23 23:19:01.472102 containerd[1506]: time="2025-11-23T23:19:01.471810877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:19:01.683982 containerd[1506]: time="2025-11-23T23:19:01.683928964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:19:01.684912 containerd[1506]: time="2025-11-23T23:19:01.684866606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:19:01.684959 containerd[1506]: time="2025-11-23T23:19:01.684937089Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:19:01.685174 kubelet[2667]: E1123 23:19:01.685135 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:19:01.685225 kubelet[2667]: E1123 23:19:01.685185 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:19:01.685280 kubelet[2667]: E1123 23:19:01.685258 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-77bb69bc5c-4pslb_calico-apiserver(a42a1cdf-25c9-4cc0-97f9-6bc1707a2182): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:19:01.686188 kubelet[2667]: E1123 23:19:01.685296 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" podUID="a42a1cdf-25c9-4cc0-97f9-6bc1707a2182" Nov 23 23:19:02.472766 containerd[1506]: time="2025-11-23T23:19:02.471147898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:19:02.691750 containerd[1506]: time="2025-11-23T23:19:02.691689114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:19:02.692693 containerd[1506]: time="2025-11-23T23:19:02.692661916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:19:02.692743 containerd[1506]: time="2025-11-23T23:19:02.692699357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:19:02.692942 kubelet[2667]: E1123 23:19:02.692897 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:19:02.693676 kubelet[2667]: E1123 23:19:02.692952 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:19:02.693676 kubelet[2667]: E1123 23:19:02.693045 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-664f9fcd4f-9fzcc_calico-system(99271428-091e-451d-a8a9-b200a082e2d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:19:02.693676 kubelet[2667]: E1123 23:19:02.693083 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3" Nov 23 23:19:04.083920 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:36418.service - OpenSSH per-connection server daemon (10.0.0.1:36418). Nov 23 23:19:04.142428 sshd[5022]: Accepted publickey for core from 10.0.0.1 port 36418 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:19:04.143722 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:19:04.150647 systemd-logind[1486]: New session 13 of user core. Nov 23 23:19:04.156235 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:19:04.306693 sshd[5025]: Connection closed by 10.0.0.1 port 36418 Nov 23 23:19:04.307555 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Nov 23 23:19:04.318595 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:36418.service: Deactivated successfully. Nov 23 23:19:04.321785 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:19:04.322834 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:19:04.325323 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:36430.service - OpenSSH per-connection server daemon (10.0.0.1:36430). Nov 23 23:19:04.327495 systemd-logind[1486]: Removed session 13. Nov 23 23:19:04.384596 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 36430 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk Nov 23 23:19:04.386067 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:19:04.393581 systemd-logind[1486]: New session 14 of user core. Nov 23 23:19:04.403218 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:19:04.473255 containerd[1506]: time="2025-11-23T23:19:04.473198170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:19:04.628322 sshd[5041]: Connection closed by 10.0.0.1 port 36430 Nov 23 23:19:04.628133 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Nov 23 23:19:04.639948 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:36430.service: Deactivated successfully. 
Nov 23 23:19:04.642591 systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 23:19:04.643799 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
Nov 23 23:19:04.647317 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:36438.service - OpenSSH per-connection server daemon (10.0.0.1:36438).
Nov 23 23:19:04.649511 systemd-logind[1486]: Removed session 14.
Nov 23 23:19:04.684305 containerd[1506]: time="2025-11-23T23:19:04.684252203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:19:04.713434 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 36438 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:19:04.714927 containerd[1506]: time="2025-11-23T23:19:04.714844746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 23:19:04.715066 containerd[1506]: time="2025-11-23T23:19:04.714907788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 23:19:04.715246 kubelet[2667]: E1123 23:19:04.715188 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:19:04.715246 kubelet[2667]: E1123 23:19:04.715237 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:19:04.715543 kubelet[2667]: E1123 23:19:04.715312 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:19:04.716671 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:19:04.717603 containerd[1506]: time="2025-11-23T23:19:04.716955713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 23:19:04.721892 systemd-logind[1486]: New session 15 of user core.
Nov 23 23:19:04.732214 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 23 23:19:04.981457 containerd[1506]: time="2025-11-23T23:19:04.981334267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:19:04.982560 containerd[1506]: time="2025-11-23T23:19:04.982498115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 23:19:04.982689 containerd[1506]: time="2025-11-23T23:19:04.982591959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 23:19:04.982775 kubelet[2667]: E1123 23:19:04.982735 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:19:04.982881 kubelet[2667]: E1123 23:19:04.982786 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:19:04.983512 kubelet[2667]: E1123 23:19:04.982876 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-mlht4_calico-system(a03beb28-b117-427c-9d7b-d6307465538e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:19:04.983512 kubelet[2667]: E1123 23:19:04.982919 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mlht4" podUID="a03beb28-b117-427c-9d7b-d6307465538e"
Nov 23 23:19:05.296039 sshd[5057]: Connection closed by 10.0.0.1 port 36438
Nov 23 23:19:05.296828 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Nov 23 23:19:05.305544 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:36438.service: Deactivated successfully.
Nov 23 23:19:05.308927 systemd[1]: session-15.scope: Deactivated successfully.
Nov 23 23:19:05.311990 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit.
Nov 23 23:19:05.315343 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:36448.service - OpenSSH per-connection server daemon (10.0.0.1:36448).
Nov 23 23:19:05.319063 systemd-logind[1486]: Removed session 15.
Nov 23 23:19:05.382823 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 36448 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:19:05.384344 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:19:05.388706 systemd-logind[1486]: New session 16 of user core.
Nov 23 23:19:05.399256 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 23 23:19:05.472703 containerd[1506]: time="2025-11-23T23:19:05.472601988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:19:05.691927 containerd[1506]: time="2025-11-23T23:19:05.691878857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:19:05.692905 containerd[1506]: time="2025-11-23T23:19:05.692864457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:19:05.692966 containerd[1506]: time="2025-11-23T23:19:05.692958621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:19:05.693272 kubelet[2667]: E1123 23:19:05.693201 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:19:05.693334 kubelet[2667]: E1123 23:19:05.693287 2667 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:19:05.693403 kubelet[2667]: E1123 23:19:05.693381 2667 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-77bb69bc5c-r2rtw_calico-apiserver(0cf0318d-6de7-4a09-88fe-94af28c80c28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:19:05.693797 kubelet[2667]: E1123 23:19:05.693765 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28"
Nov 23 23:19:05.722718 sshd[5079]: Connection closed by 10.0.0.1 port 36448
Nov 23 23:19:05.723118 sshd-session[5076]: pam_unix(sshd:session): session closed for user core
Nov 23 23:19:05.737333 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:36448.service: Deactivated successfully.
Nov 23 23:19:05.741975 systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 23:19:05.743757 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
Nov 23 23:19:05.746726 systemd-logind[1486]: Removed session 16.
Nov 23 23:19:05.749215 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:36450.service - OpenSSH per-connection server daemon (10.0.0.1:36450).
Nov 23 23:19:05.827690 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 36450 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:19:05.830306 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:19:05.836595 systemd-logind[1486]: New session 17 of user core.
Nov 23 23:19:05.856219 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 23 23:19:05.997900 sshd[5094]: Connection closed by 10.0.0.1 port 36450
Nov 23 23:19:05.997125 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
Nov 23 23:19:06.001779 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:36450.service: Deactivated successfully.
Nov 23 23:19:06.003997 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 23:19:06.005811 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
Nov 23 23:19:06.008093 systemd-logind[1486]: Removed session 17.
Nov 23 23:19:09.473687 kubelet[2667]: E1123 23:19:09.473474 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dfffb7d85-g57l4" podUID="1911fb13-5d38-47f5-98a7-ab4a295196c4"
Nov 23 23:19:11.013800 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:43248.service - OpenSSH per-connection server daemon (10.0.0.1:43248).
Nov 23 23:19:11.079732 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 43248 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:19:11.081968 sshd-session[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:19:11.087062 systemd-logind[1486]: New session 18 of user core.
Nov 23 23:19:11.098248 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 23:19:11.227519 sshd[5125]: Connection closed by 10.0.0.1 port 43248
Nov 23 23:19:11.227359 sshd-session[5122]: pam_unix(sshd:session): session closed for user core
Nov 23 23:19:11.231080 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:43248.service: Deactivated successfully.
Nov 23 23:19:11.232829 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 23:19:11.233470 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
Nov 23 23:19:11.234476 systemd-logind[1486]: Removed session 18.
Nov 23 23:19:13.471033 kubelet[2667]: E1123 23:19:13.470892 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-4pslb" podUID="a42a1cdf-25c9-4cc0-97f9-6bc1707a2182"
Nov 23 23:19:14.472588 kubelet[2667]: E1123 23:19:14.472378 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-664f9fcd4f-9fzcc" podUID="99271428-091e-451d-a8a9-b200a082e2d3"
Nov 23 23:19:15.472272 kubelet[2667]: E1123 23:19:15.472219 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zjz79" podUID="6e697a16-c70a-4db1-9f7e-9b02bca06e22"
Nov 23 23:19:16.240127 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:43262.service - OpenSSH per-connection server daemon (10.0.0.1:43262).
Nov 23 23:19:16.315494 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 43262 ssh2: RSA SHA256:xK0odXIrRLy2uvFTHd2XiQ92YaTCLtqdWVOOXxQURNk
Nov 23 23:19:16.317532 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:19:16.322075 systemd-logind[1486]: New session 19 of user core.
Nov 23 23:19:16.332720 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 23:19:16.468370 sshd[5168]: Connection closed by 10.0.0.1 port 43262
Nov 23 23:19:16.468551 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Nov 23 23:19:16.471370 kubelet[2667]: E1123 23:19:16.471156 2667 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb69bc5c-r2rtw" podUID="0cf0318d-6de7-4a09-88fe-94af28c80c28"
Nov 23 23:19:16.477074 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:43262.service: Deactivated successfully.
Nov 23 23:19:16.481779 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 23:19:16.484277 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
Nov 23 23:19:16.488108 systemd-logind[1486]: Removed session 19.