Sep 9 23:51:08.789714 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:51:08.789738 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:51:08.789748 kernel: KASLR enabled
Sep 9 23:51:08.789754 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:51:08.789760 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 23:51:08.789765 kernel: random: crng init done
Sep 9 23:51:08.789772 kernel: secureboot: Secure boot disabled
Sep 9 23:51:08.789778 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:51:08.789783 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 23:51:08.789791 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:51:08.789797 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789803 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789808 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789814 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789822 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789829 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789836 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789842 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789848 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:08.789854 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:51:08.789860 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:51:08.789867 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:08.789873 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 23:51:08.789879 kernel: Zone ranges:
Sep 9 23:51:08.789885 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:08.789892 kernel: DMA32 empty
Sep 9 23:51:08.789899 kernel: Normal empty
Sep 9 23:51:08.789904 kernel: Device empty
Sep 9 23:51:08.789910 kernel: Movable zone start for each node
Sep 9 23:51:08.789917 kernel: Early memory node ranges
Sep 9 23:51:08.789923 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 23:51:08.789929 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 23:51:08.789935 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 23:51:08.789941 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 23:51:08.789947 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 23:51:08.789953 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 23:51:08.789960 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 23:51:08.789967 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 23:51:08.789973 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 23:51:08.789979 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:51:08.789988 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:51:08.789995 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:51:08.790001 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:51:08.790009 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:08.790016 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:51:08.790022 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 23:51:08.790029 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:51:08.790035 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:51:08.790042 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:51:08.790048 kernel: psci: Trusted OS migration not required
Sep 9 23:51:08.790055 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:51:08.790061 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:51:08.790068 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:51:08.790075 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:51:08.790082 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:51:08.790089 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:51:08.790095 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:51:08.790102 kernel: CPU features: detected: Spectre-v4
Sep 9 23:51:08.790115 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:51:08.790122 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:51:08.790128 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:51:08.790135 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:51:08.790141 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:51:08.790148 kernel: alternatives: applying boot alternatives
Sep 9 23:51:08.790165 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:51:08.790174 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:51:08.790180 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:51:08.790187 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:51:08.790193 kernel: Fallback order for Node 0: 0
Sep 9 23:51:08.790200 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 23:51:08.790206 kernel: Policy zone: DMA
Sep 9 23:51:08.790213 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:51:08.790219 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 23:51:08.790226 kernel: software IO TLB: area num 4.
Sep 9 23:51:08.790232 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 23:51:08.790238 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 23:51:08.790246 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:51:08.790253 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:51:08.790261 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:51:08.790267 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:51:08.790274 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:51:08.790290 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:51:08.790297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:51:08.790304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:51:08.790310 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:51:08.790317 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:51:08.790324 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:51:08.790332 kernel: GICv3: 256 SPIs implemented
Sep 9 23:51:08.790339 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:51:08.790345 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:51:08.790352 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:51:08.790358 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:51:08.790365 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:51:08.790371 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:51:08.790378 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:51:08.790385 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:51:08.790391 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 23:51:08.790398 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 23:51:08.790405 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:51:08.790413 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:08.790419 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:51:08.790426 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:51:08.790433 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:51:08.790440 kernel: arm-pv: using stolen time PV
Sep 9 23:51:08.790447 kernel: Console: colour dummy device 80x25
Sep 9 23:51:08.790454 kernel: ACPI: Core revision 20240827
Sep 9 23:51:08.790461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:51:08.790468 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:51:08.790474 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:51:08.790483 kernel: landlock: Up and running.
Sep 9 23:51:08.790489 kernel: SELinux: Initializing.
Sep 9 23:51:08.790496 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:51:08.790503 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:51:08.790509 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:51:08.790516 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:51:08.790523 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:51:08.790530 kernel: Remapping and enabling EFI services.
Sep 9 23:51:08.790537 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:51:08.790550 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:51:08.790557 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:51:08.790565 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 23:51:08.790573 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:08.790580 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:51:08.790587 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:51:08.790594 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:51:08.790601 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 23:51:08.790610 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:08.790617 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:51:08.790624 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:51:08.790631 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:51:08.790638 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 23:51:08.790645 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:08.790651 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:51:08.790658 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:51:08.790666 kernel: SMP: Total of 4 processors activated.
Sep 9 23:51:08.790674 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:51:08.790681 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:51:08.790688 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:51:08.790696 kernel: CPU features: detected: Common not Private translations
Sep 9 23:51:08.790703 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:51:08.790709 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:51:08.790716 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:51:08.790723 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:51:08.790730 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:51:08.790739 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:51:08.790746 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:51:08.790753 kernel: alternatives: applying system-wide alternatives
Sep 9 23:51:08.790760 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 23:51:08.790768 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 9 23:51:08.790775 kernel: devtmpfs: initialized
Sep 9 23:51:08.790782 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:51:08.790789 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:51:08.790796 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:51:08.790805 kernel: 0 pages in range for non-PLT usage
Sep 9 23:51:08.790812 kernel: 508576 pages in range for PLT usage
Sep 9 23:51:08.790818 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:51:08.790825 kernel: SMBIOS 3.0.0 present.
Sep 9 23:51:08.790832 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:51:08.790840 kernel: DMI: Memory slots populated: 1/1
Sep 9 23:51:08.790847 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:51:08.790854 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:51:08.790861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:51:08.790870 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:51:08.790877 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:51:08.790884 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Sep 9 23:51:08.790891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:51:08.790898 kernel: cpuidle: using governor menu
Sep 9 23:51:08.790905 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:51:08.790912 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:51:08.790919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:51:08.790926 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:51:08.790934 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:51:08.790941 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:51:08.790949 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:51:08.790956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:51:08.790963 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:51:08.790970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:51:08.790977 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:51:08.790984 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:51:08.790991 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:51:08.790999 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:51:08.791007 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:51:08.791014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:51:08.791021 kernel: ACPI: Interpreter enabled
Sep 9 23:51:08.791028 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:51:08.791035 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:51:08.791042 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:51:08.791049 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:51:08.791056 kernel: ACPI: CPU2 has been hot-added
Sep 9 23:51:08.791063 kernel: ACPI: CPU3 has been hot-added
Sep 9 23:51:08.791072 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:51:08.791079 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:51:08.791086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:51:08.791237 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:51:08.791328 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:51:08.791391 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:51:08.791451 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:51:08.791513 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:51:08.791522 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:51:08.791530 kernel: PCI host bridge to bus 0000:00
Sep 9 23:51:08.791597 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:51:08.791657 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:51:08.791710 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:51:08.791763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:51:08.791855 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:51:08.791927 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 23:51:08.791996 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 23:51:08.792059 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 23:51:08.792121 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:51:08.792191 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 23:51:08.792254 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 23:51:08.792332 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 23:51:08.792388 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:51:08.792443 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:51:08.792497 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:51:08.792506 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:51:08.792514 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:51:08.792521 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:51:08.792531 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:51:08.792538 kernel: iommu: Default domain type: Translated
Sep 9 23:51:08.792545 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:51:08.792553 kernel: efivars: Registered efivars operations
Sep 9 23:51:08.792560 kernel: vgaarb: loaded
Sep 9 23:51:08.792567 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:51:08.792574 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:51:08.792581 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:51:08.792588 kernel: pnp: PnP ACPI init
Sep 9 23:51:08.792657 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:51:08.792668 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:51:08.792675 kernel: NET: Registered PF_INET protocol family
Sep 9 23:51:08.792683 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:51:08.792690 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:51:08.792698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:51:08.792705 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:51:08.792712 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:51:08.792721 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:51:08.792728 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:51:08.792735 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:51:08.792742 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:51:08.792750 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:51:08.792757 kernel: kvm [1]: HYP mode not available
Sep 9 23:51:08.792764 kernel: Initialise system trusted keyrings
Sep 9 23:51:08.792771 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:51:08.792778 kernel: Key type asymmetric registered
Sep 9 23:51:08.792787 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:51:08.792794 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:51:08.792801 kernel: io scheduler mq-deadline registered
Sep 9 23:51:08.792808 kernel: io scheduler kyber registered
Sep 9 23:51:08.792815 kernel: io scheduler bfq registered
Sep 9 23:51:08.792822 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:51:08.792830 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:51:08.792837 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:51:08.792896 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:51:08.792907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:51:08.792914 kernel: thunder_xcv, ver 1.0
Sep 9 23:51:08.792921 kernel: thunder_bgx, ver 1.0
Sep 9 23:51:08.792928 kernel: nicpf, ver 1.0
Sep 9 23:51:08.792935 kernel: nicvf, ver 1.0
Sep 9 23:51:08.793004 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:51:08.793105 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:51:08 UTC (1757461868)
Sep 9 23:51:08.793114 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:51:08.793122 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:51:08.793131 kernel: watchdog: NMI not fully supported
Sep 9 23:51:08.793138 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:51:08.793146 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:51:08.793160 kernel: Segment Routing with IPv6
Sep 9 23:51:08.793168 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:51:08.793175 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:51:08.793182 kernel: Key type dns_resolver registered
Sep 9 23:51:08.793189 kernel: registered taskstats version 1
Sep 9 23:51:08.793196 kernel: Loading compiled-in X.509 certificates
Sep 9 23:51:08.793205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:51:08.793213 kernel: Demotion targets for Node 0: null
Sep 9 23:51:08.793220 kernel: Key type .fscrypt registered
Sep 9 23:51:08.793227 kernel: Key type fscrypt-provisioning registered
Sep 9 23:51:08.793234 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:51:08.793241 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:51:08.793248 kernel: ima: No architecture policies found
Sep 9 23:51:08.793255 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:51:08.793264 kernel: clk: Disabling unused clocks
Sep 9 23:51:08.793271 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:51:08.793285 kernel: Warning: unable to open an initial console.
Sep 9 23:51:08.793293 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:51:08.793300 kernel: Run /init as init process
Sep 9 23:51:08.793307 kernel: with arguments:
Sep 9 23:51:08.793314 kernel: /init
Sep 9 23:51:08.793321 kernel: with environment:
Sep 9 23:51:08.793328 kernel: HOME=/
Sep 9 23:51:08.793335 kernel: TERM=linux
Sep 9 23:51:08.793344 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:51:08.793380 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:51:08.793403 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:51:08.793411 systemd[1]: Detected virtualization kvm.
Sep 9 23:51:08.793418 systemd[1]: Detected architecture arm64.
Sep 9 23:51:08.793425 systemd[1]: Running in initrd.
Sep 9 23:51:08.793433 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:51:08.793443 systemd[1]: Hostname set to .
Sep 9 23:51:08.793450 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:51:08.793458 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:51:08.793465 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:51:08.793473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:51:08.793481 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:51:08.793488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:51:08.793496 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:51:08.793506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:51:08.793515 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:51:08.793522 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:51:08.793530 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:51:08.793537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:51:08.793545 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:51:08.793553 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:51:08.793562 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:51:08.793569 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:51:08.793577 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:51:08.793585 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:51:08.793592 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:51:08.793600 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:51:08.793608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:51:08.793616 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:51:08.793625 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:51:08.793633 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:51:08.793640 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:51:08.793648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:51:08.793656 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:51:08.793664 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:51:08.793671 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:51:08.793679 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:51:08.793686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:51:08.793696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:08.793703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:51:08.793711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:51:08.793719 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:51:08.793728 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:51:08.793755 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 23:51:08.793776 systemd-journald[244]: Journal started
Sep 9 23:51:08.793796 systemd-journald[244]: Runtime Journal (/run/log/journal/e0e1741b99cd42c788dcee98e75a50a2) is 6M, max 48.5M, 42.4M free.
Sep 9 23:51:08.782199 systemd-modules-load[246]: Inserted module 'overlay'
Sep 9 23:51:08.795543 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:51:08.798187 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:51:08.804442 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:51:08.804467 kernel: Bridge firewalling registered
Sep 9 23:51:08.802548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:08.802855 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 9 23:51:08.814420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:51:08.815867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:51:08.820399 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:51:08.820633 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:51:08.823424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:51:08.843126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:51:08.844697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:51:08.852840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:51:08.857049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:51:08.858323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:51:08.872467 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:51:08.875438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:51:08.899950 systemd-resolved[284]: Positive Trust Anchors:
Sep 9 23:51:08.899971 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:51:08.900002 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:51:08.904965 systemd-resolved[284]: Defaulting to hostname 'linux'.
Sep 9 23:51:08.908848 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:51:08.906063 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:51:08.908254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:51:08.982323 kernel: SCSI subsystem initialized
Sep 9 23:51:08.986302 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:51:08.995320 kernel: iscsi: registered transport (tcp)
Sep 9 23:51:09.008493 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:51:09.008558 kernel: QLogic iSCSI HBA Driver
Sep 9 23:51:09.026630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:51:09.050385 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:51:09.051675 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:51:09.103501 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:51:09.107185 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:51:09.174327 kernel: raid6: neonx8 gen() 13404 MB/s
Sep 9 23:51:09.191305 kernel: raid6: neonx4 gen() 15481 MB/s
Sep 9 23:51:09.208311 kernel: raid6: neonx2 gen() 13083 MB/s
Sep 9 23:51:09.225309 kernel: raid6: neonx1 gen() 10426 MB/s
Sep 9 23:51:09.242317 kernel: raid6: int64x8 gen() 6889 MB/s
Sep 9 23:51:09.259313 kernel: raid6: int64x4 gen() 7331 MB/s
Sep 9 23:51:09.276342 kernel: raid6: int64x2 gen() 6076 MB/s
Sep 9 23:51:09.293338 kernel: raid6: int64x1 gen() 5049 MB/s
Sep 9 23:51:09.293414 kernel: raid6: using algorithm neonx4 gen() 15481 MB/s
Sep 9 23:51:09.310540 kernel: raid6: .... xor() 12334 MB/s, rmw enabled
Sep 9 23:51:09.310610 kernel: raid6: using neon recovery algorithm
Sep 9 23:51:09.315324 kernel: xor: measuring software checksum speed
Sep 9 23:51:09.315389 kernel: 8regs : 19227 MB/sec
Sep 9 23:51:09.316361 kernel: 32regs : 21647 MB/sec
Sep 9 23:51:09.317396 kernel: arm64_neon : 28138 MB/sec
Sep 9 23:51:09.317418 kernel: xor: using function: arm64_neon (28138 MB/sec)
Sep 9 23:51:09.370473 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:51:09.377860 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:51:09.383345 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:51:09.421986 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 9 23:51:09.427527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:51:09.429661 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:51:09.460940 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Sep 9 23:51:09.491579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:51:09.496418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:51:09.558908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:51:09.562030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:51:09.614327 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:51:09.631298 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:51:09.634801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:51:09.634927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:09.641417 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:51:09.641459 kernel: GPT:9289727 != 19775487
Sep 9 23:51:09.641470 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:51:09.641479 kernel: GPT:9289727 != 19775487
Sep 9 23:51:09.641487 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:51:09.641495 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:09.641617 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:09.643490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:09.673367 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:51:09.674820 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:51:09.676929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:09.685848 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:51:09.693768 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:51:09.704620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:51:09.705871 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:51:09.709047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:51:09.711361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:51:09.713232 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:51:09.716078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:51:09.717987 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:51:09.734367 disk-uuid[589]: Primary Header is updated.
Sep 9 23:51:09.734367 disk-uuid[589]: Secondary Entries is updated.
Sep 9 23:51:09.734367 disk-uuid[589]: Secondary Header is updated.
Sep 9 23:51:09.738312 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:09.739799 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:51:10.795665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:10.796207 disk-uuid[595]: The operation has completed successfully.
Sep 9 23:51:10.826247 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:51:10.826364 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:51:10.855508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:51:10.879988 sh[608]: Success
Sep 9 23:51:10.895989 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:51:10.896055 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:51:10.897221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:51:10.913323 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:51:10.946822 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:51:10.949416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:51:10.980544 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:51:10.985696 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (620)
Sep 9 23:51:10.985720 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:51:10.985730 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:10.990328 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:51:10.990380 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:51:10.991696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:51:10.993467 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:51:10.994763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:51:10.996250 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:51:11.013972 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:51:11.030345 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (648)
Sep 9 23:51:11.032666 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:11.032708 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:11.036491 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:11.036528 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:11.041313 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:11.042171 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:51:11.045146 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:51:11.126323 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:51:11.132485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:51:11.174049 systemd-networkd[796]: lo: Link UP
Sep 9 23:51:11.174062 systemd-networkd[796]: lo: Gained carrier
Sep 9 23:51:11.174823 systemd-networkd[796]: Enumeration completed
Sep 9 23:51:11.174957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:51:11.175244 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:11.175247 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:51:11.176188 systemd-networkd[796]: eth0: Link UP
Sep 9 23:51:11.176302 systemd-networkd[796]: eth0: Gained carrier
Sep 9 23:51:11.176312 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:11.176951 systemd[1]: Reached target network.target - Network.
Sep 9 23:51:11.196195 ignition[693]: Ignition 2.21.0
Sep 9 23:51:11.196202 ignition[693]: Stage: fetch-offline
Sep 9 23:51:11.196240 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:11.196247 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:11.201356 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:51:11.196438 ignition[693]: parsed url from cmdline: ""
Sep 9 23:51:11.196442 ignition[693]: no config URL provided
Sep 9 23:51:11.196446 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:51:11.196454 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:51:11.196474 ignition[693]: op(1): [started] loading QEMU firmware config module
Sep 9 23:51:11.196479 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:51:11.207006 ignition[693]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:51:11.257463 ignition[693]: parsing config with SHA512: b55763b2eb019a716a2f589e06e1fe2e4ea4efb0dabaeef3c0f083063d4371543a73b0ef6417792126fc931894192f6dba3f224c3177f27b60acc8dc73bf4c29
Sep 9 23:51:11.265833 unknown[693]: fetched base config from "system"
Sep 9 23:51:11.265844 unknown[693]: fetched user config from "qemu"
Sep 9 23:51:11.266413 ignition[693]: fetch-offline: fetch-offline passed
Sep 9 23:51:11.266512 ignition[693]: Ignition finished successfully
Sep 9 23:51:11.269292 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:51:11.271337 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:51:11.272332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:51:11.313136 ignition[809]: Ignition 2.21.0
Sep 9 23:51:11.313165 ignition[809]: Stage: kargs
Sep 9 23:51:11.313338 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:11.313347 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:11.317391 ignition[809]: kargs: kargs passed
Sep 9 23:51:11.317454 ignition[809]: Ignition finished successfully
Sep 9 23:51:11.319736 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:51:11.321867 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:51:11.357237 ignition[817]: Ignition 2.21.0
Sep 9 23:51:11.357248 ignition[817]: Stage: disks
Sep 9 23:51:11.357435 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:11.357444 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:11.361319 ignition[817]: disks: disks passed
Sep 9 23:51:11.362006 ignition[817]: Ignition finished successfully
Sep 9 23:51:11.364461 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:51:11.366644 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:51:11.367760 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:51:11.369710 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:51:11.371489 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:51:11.373065 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:51:11.375600 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:51:11.403344 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:51:11.410254 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:51:11.413170 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:51:11.481313 kernel: EXT4-fs (vda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:51:11.481660 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:51:11.482931 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:51:11.485331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:51:11.487027 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:51:11.488102 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:51:11.488171 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:51:11.488197 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:51:11.505036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:51:11.507616 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:51:11.512312 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 9 23:51:11.514837 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:11.514875 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:11.523305 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:11.523352 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:11.525191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:51:11.550581 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:51:11.555316 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:51:11.563050 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:51:11.567361 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:51:11.653295 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:51:11.655739 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:51:11.657522 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:51:11.671305 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:11.688380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:51:11.699885 ignition[949]: INFO : Ignition 2.21.0
Sep 9 23:51:11.699885 ignition[949]: INFO : Stage: mount
Sep 9 23:51:11.701533 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:11.701533 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:11.703507 ignition[949]: INFO : mount: mount passed
Sep 9 23:51:11.703507 ignition[949]: INFO : Ignition finished successfully
Sep 9 23:51:11.706027 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:51:11.707811 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:51:11.985373 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:51:11.987883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:51:12.011608 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 9 23:51:12.011644 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:12.011656 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:12.014407 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:12.014450 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:12.015938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:51:12.048468 ignition[980]: INFO : Ignition 2.21.0
Sep 9 23:51:12.048468 ignition[980]: INFO : Stage: files
Sep 9 23:51:12.050898 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:12.050898 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:12.050898 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:51:12.054299 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:51:12.054299 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:51:12.054299 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:51:12.054299 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:51:12.054299 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:51:12.053612 unknown[980]: wrote ssh authorized keys file for user: core
Sep 9 23:51:12.061694 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:51:12.061694 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 23:51:12.115140 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:51:12.921619 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:51:12.923806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:51:12.937817 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 23:51:13.090443 systemd-networkd[796]: eth0: Gained IPv6LL
Sep 9 23:51:13.284806 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 23:51:13.707611 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 23:51:13.707611 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 23:51:13.711615 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 23:51:13.714596 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:51:13.753853 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:51:13.757973 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:51:13.760974 ignition[980]: INFO : files: files passed
Sep 9 23:51:13.760974 ignition[980]: INFO : Ignition finished successfully
Sep 9 23:51:13.761677 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:51:13.766074 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:51:13.768299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:51:13.796441 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:51:13.796583 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:51:13.800126 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:51:13.802439 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:13.802439 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:13.809380 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:13.804829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:51:13.806516 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:51:13.811184 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:51:13.863536 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:51:13.864399 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:51:13.866751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:51:13.867766 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:51:13.869251 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:51:13.870232 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:51:13.906159 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:51:13.908821 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:51:13.929217 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:51:13.930436 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:51:13.932552 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:51:13.934107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:51:13.934356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:51:13.936993 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:51:13.939113 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:51:13.940774 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:51:13.942392 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:51:13.944313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:51:13.946417 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:51:13.948347 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:51:13.950056 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:51:13.951884 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:51:13.953856 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:51:13.955503 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:51:13.957012 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:51:13.957153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:51:13.959306 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:51:13.961194 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:51:13.963483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:51:13.964423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:51:13.965557 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:51:13.965716 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:51:13.972739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:51:13.972894 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:51:13.974989 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:51:13.976527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:51:13.981391 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:51:13.982542 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:51:13.984599 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:51:13.986437 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:51:13.986541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:51:13.988569 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:51:13.988721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:51:13.990085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:51:13.990569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:51:13.992323 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:51:13.992434 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:51:13.997776 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:51:13.998764 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:51:13.998900 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:51:14.001520 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:51:14.003315 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:51:14.003479 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:51:14.005053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:51:14.005250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:51:14.010751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:51:14.014762 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:51:14.023993 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:51:14.030663 ignition[1035]: INFO : Ignition 2.21.0
Sep 9 23:51:14.030663 ignition[1035]: INFO : Stage: umount
Sep 9 23:51:14.033227 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:14.033227 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:14.033227 ignition[1035]: INFO : umount: umount passed
Sep 9 23:51:14.033227 ignition[1035]: INFO : Ignition finished successfully
Sep 9 23:51:14.034570 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:51:14.034672 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:51:14.036135 systemd[1]: Stopped target network.target - Network.
Sep 9 23:51:14.037581 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:51:14.037675 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:51:14.039159 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:51:14.039202 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:51:14.040770 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:51:14.040859 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:51:14.042638 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:51:14.042679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:51:14.044477 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:51:14.046076 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:51:14.055634 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:51:14.055750 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:51:14.060411 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:51:14.060718 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:51:14.060765 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:51:14.063042 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:51:14.068174 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:51:14.068336 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:51:14.072034 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:51:14.072215 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:51:14.074178 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:51:14.074222 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:51:14.076816 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:51:14.078395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:51:14.078466 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:51:14.080105 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:51:14.080164 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:51:14.085461 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:51:14.085514 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 9 23:51:14.087007 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:51:14.090439 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:51:14.096600 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:51:14.096703 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:51:14.098269 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:51:14.098650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:51:14.105981 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:51:14.106147 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:51:14.108196 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:51:14.108234 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:51:14.109705 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:51:14.109741 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:51:14.111061 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:51:14.111111 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:51:14.113443 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:51:14.113518 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:51:14.116038 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:51:14.116095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:51:14.119563 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:51:14.120582 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 9 23:51:14.120644 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:51:14.123358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:51:14.123406 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:51:14.126065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:51:14.126127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:51:14.129682 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:51:14.133454 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:51:14.139096 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:51:14.139229 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:51:14.141530 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:51:14.143852 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:51:14.179926 systemd[1]: Switching root. Sep 9 23:51:14.220191 systemd-journald[244]: Journal stopped Sep 9 23:51:15.094492 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 9 23:51:15.094546 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:51:15.094566 kernel: SELinux: policy capability open_perms=1 Sep 9 23:51:15.094576 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:51:15.094586 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:51:15.094595 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:51:15.094605 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:51:15.094615 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:51:15.094625 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:51:15.094634 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:51:15.094644 kernel: audit: type=1403 audit(1757461874.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:51:15.094658 systemd[1]: Successfully loaded SELinux policy in 70.026ms. Sep 9 23:51:15.094680 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.819ms. Sep 9 23:51:15.094691 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:51:15.094702 systemd[1]: Detected virtualization kvm. Sep 9 23:51:15.094714 systemd[1]: Detected architecture arm64. Sep 9 23:51:15.094724 systemd[1]: Detected first boot. Sep 9 23:51:15.094735 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:51:15.094745 zram_generator::config[1082]: No configuration found. Sep 9 23:51:15.094756 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:51:15.094766 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:51:15.094777 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 9 23:51:15.094788 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:51:15.094798 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:51:15.094809 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:51:15.094820 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:51:15.094831 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:51:15.094842 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:51:15.094852 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:51:15.094863 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:51:15.094877 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:51:15.094888 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:51:15.094902 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:51:15.094913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:51:15.094924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:51:15.094935 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:51:15.094946 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:51:15.094956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:51:15.094967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:51:15.094977 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 9 23:51:15.094988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:51:15.095000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:51:15.095011 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:51:15.095022 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:51:15.095032 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:51:15.095043 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:51:15.095054 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:51:15.095067 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:51:15.095077 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:51:15.095090 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:51:15.095100 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:51:15.095110 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:51:15.095120 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:51:15.095131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:51:15.095151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:51:15.095163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:51:15.095174 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:51:15.095184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:51:15.095197 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:51:15.095208 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:51:15.095218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 9 23:51:15.095229 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:51:15.095240 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:51:15.095250 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:51:15.095260 systemd[1]: Reached target machines.target - Containers. Sep 9 23:51:15.095271 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:51:15.095290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:15.095303 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:51:15.095314 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:51:15.095325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:15.095335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:51:15.095345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:15.095356 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:51:15.095366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:15.095428 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:51:15.095446 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:51:15.095458 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:51:15.095468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:51:15.095479 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 9 23:51:15.095491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:15.095501 kernel: fuse: init (API version 7.41) Sep 9 23:51:15.095511 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:51:15.095521 kernel: ACPI: bus type drm_connector registered Sep 9 23:51:15.095531 kernel: loop: module loaded Sep 9 23:51:15.095543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:51:15.095554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:51:15.095565 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:51:15.095576 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:51:15.095588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:51:15.095599 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:51:15.095609 systemd[1]: Stopped verity-setup.service. Sep 9 23:51:15.095655 systemd-journald[1157]: Collecting audit messages is disabled. Sep 9 23:51:15.095677 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:51:15.095689 systemd-journald[1157]: Journal started Sep 9 23:51:15.095711 systemd-journald[1157]: Runtime Journal (/run/log/journal/e0e1741b99cd42c788dcee98e75a50a2) is 6M, max 48.5M, 42.4M free. Sep 9 23:51:14.841731 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:51:14.862182 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:51:14.862748 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:51:15.098643 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 9 23:51:15.099477 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:51:15.100734 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:51:15.101779 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:51:15.102808 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:51:15.104036 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:51:15.107331 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:51:15.108808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:51:15.110453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:51:15.110627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:51:15.112037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:51:15.112214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:15.113617 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:51:15.113780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:51:15.115243 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:15.115445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:15.118076 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:51:15.118261 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:51:15.119608 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:15.119769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:15.121667 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:51:15.122943 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 23:51:15.124516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:51:15.128400 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:51:15.141697 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:51:15.144499 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:51:15.146760 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:51:15.147973 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:51:15.148005 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:51:15.150018 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:51:15.162198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:51:15.163488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:15.165004 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:51:15.167077 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:51:15.168697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:51:15.170583 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:51:15.171897 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:51:15.174132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 9 23:51:15.178946 systemd-journald[1157]: Time spent on flushing to /var/log/journal/e0e1741b99cd42c788dcee98e75a50a2 is 17.058ms for 883 entries. Sep 9 23:51:15.178946 systemd-journald[1157]: System Journal (/var/log/journal/e0e1741b99cd42c788dcee98e75a50a2) is 8M, max 195.6M, 187.6M free. Sep 9 23:51:15.222526 systemd-journald[1157]: Received client request to flush runtime journal. Sep 9 23:51:15.222565 kernel: loop0: detected capacity change from 0 to 207008 Sep 9 23:51:15.177546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:51:15.181835 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:51:15.186087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:51:15.187700 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:51:15.190741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:51:15.201317 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:51:15.203228 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:51:15.206659 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:51:15.219717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:51:15.226949 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:51:15.233988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:51:15.236543 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:51:15.240248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:51:15.245354 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 9 23:51:15.260320 kernel: loop1: detected capacity change from 0 to 119320 Sep 9 23:51:15.264809 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Sep 9 23:51:15.265178 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Sep 9 23:51:15.268938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:51:15.290312 kernel: loop2: detected capacity change from 0 to 100608 Sep 9 23:51:15.332322 kernel: loop3: detected capacity change from 0 to 207008 Sep 9 23:51:15.339300 kernel: loop4: detected capacity change from 0 to 119320 Sep 9 23:51:15.344312 kernel: loop5: detected capacity change from 0 to 100608 Sep 9 23:51:15.348046 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:51:15.348782 (sd-merge)[1222]: Merged extensions into '/usr'. Sep 9 23:51:15.354504 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:51:15.354523 systemd[1]: Reloading... Sep 9 23:51:15.417472 zram_generator::config[1251]: No configuration found. Sep 9 23:51:15.465277 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:51:15.567882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:51:15.567955 systemd[1]: Reloading finished in 213 ms. Sep 9 23:51:15.582898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:51:15.585399 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:51:15.599518 systemd[1]: Starting ensure-sysext.service... Sep 9 23:51:15.601429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:51:15.610653 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:51:15.610672 systemd[1]: Reloading... 
Sep 9 23:51:15.624589 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:51:15.624618 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:51:15.624882 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:51:15.625069 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:51:15.627676 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:51:15.627912 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 9 23:51:15.627963 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 9 23:51:15.632465 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:51:15.632480 systemd-tmpfiles[1283]: Skipping /boot Sep 9 23:51:15.644053 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:51:15.644067 systemd-tmpfiles[1283]: Skipping /boot Sep 9 23:51:15.653310 zram_generator::config[1310]: No configuration found. Sep 9 23:51:15.807406 systemd[1]: Reloading finished in 196 ms. Sep 9 23:51:15.832900 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:51:15.838474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:51:15.854356 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:51:15.856776 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:51:15.867045 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:51:15.873484 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 23:51:15.875869 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:51:15.878451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:51:15.886884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:15.897127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:15.899407 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:15.901695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:15.902803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:15.902921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:15.904833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:51:15.908596 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:51:15.910468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:51:15.910668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:15.912448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:15.912633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:15.914217 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:15.914436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:15.920334 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 9 23:51:15.925152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:51:15.925467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:51:15.929309 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:51:15.930090 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:51:15.930727 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Sep 9 23:51:15.930846 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:51:15.935750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:15.939153 augenrules[1384]: No rules Sep 9 23:51:15.940273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:15.943566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:15.955744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:15.957041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:15.957177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:15.957260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 9 23:51:15.958198 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:51:15.958653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:51:15.960444 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:51:15.962058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:51:15.962208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:15.963734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:15.963888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:15.967245 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:51:15.968849 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:15.969042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:15.975150 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:51:15.987316 systemd[1]: Finished ensure-sysext.service. Sep 9 23:51:16.002341 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:51:16.003229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:16.004545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:16.008635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:51:16.015296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:16.019454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:16.021442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 23:51:16.021488 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:16.024108 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:51:16.029668 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:51:16.030577 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:51:16.045324 augenrules[1429]: /sbin/augenrules: No change Sep 9 23:51:16.051399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:51:16.051947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:16.053073 augenrules[1458]: No rules Sep 9 23:51:16.055480 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:51:16.055734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:51:16.057322 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:51:16.057632 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:51:16.059222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:16.059384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:16.060735 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:16.060950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:16.068547 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:51:16.068637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 9 23:51:16.068675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 23:51:16.115739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:51:16.119121 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 23:51:16.150990 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 23:51:16.170208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:16.186546 systemd-resolved[1350]: Positive Trust Anchors:
Sep 9 23:51:16.186567 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:51:16.186598 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:51:16.194863 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Sep 9 23:51:16.202919 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:51:16.204257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:51:16.212041 systemd-networkd[1436]: lo: Link UP
Sep 9 23:51:16.213307 systemd-networkd[1436]: lo: Gained carrier
Sep 9 23:51:16.214202 systemd-networkd[1436]: Enumeration completed
Sep 9 23:51:16.214382 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:51:16.214808 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:16.214898 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:51:16.215524 systemd-networkd[1436]: eth0: Link UP
Sep 9 23:51:16.215722 systemd-networkd[1436]: eth0: Gained carrier
Sep 9 23:51:16.215833 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:16.216457 systemd[1]: Reached target network.target - Network.
Sep 9 23:51:16.220349 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 23:51:16.223440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 23:51:16.245408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:16.246772 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 23:51:16.248164 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:51:16.249118 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 23:51:16.250293 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 23:51:16.250558 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:51:16.251323 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection.
Sep 9 23:51:16.251477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:51:16.252923 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 23:51:16.252976 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:51:16.253002 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:51:16.253318 systemd-timesyncd[1442]: Initial clock synchronization to Tue 2025-09-09 23:51:16.649112 UTC.
Sep 9 23:51:16.254125 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:51:16.255227 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:51:16.256386 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:51:16.257308 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:51:16.259905 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:51:16.262153 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:51:16.264817 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:51:16.265972 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:51:16.266986 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:51:16.270154 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:51:16.271383 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:51:16.274330 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 23:51:16.275728 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:51:16.277274 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:51:16.278159 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:51:16.279161 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:51:16.279195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:51:16.280354 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:51:16.282244 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:51:16.284141 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:51:16.291206 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:51:16.293199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:51:16.294266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:51:16.295267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:51:16.298398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:51:16.300357 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:51:16.301607 jq[1504]: false
Sep 9 23:51:16.305573 extend-filesystems[1505]: Found /dev/vda6
Sep 9 23:51:16.305536 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:51:16.310685 extend-filesystems[1505]: Found /dev/vda9
Sep 9 23:51:16.312153 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:51:16.313746 extend-filesystems[1505]: Checking size of /dev/vda9
Sep 9 23:51:16.314139 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:51:16.314745 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:51:16.315559 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:51:16.317778 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:51:16.324824 extend-filesystems[1505]: Resized partition /dev/vda9
Sep 9 23:51:16.328001 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:51:16.329827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:51:16.330755 extend-filesystems[1531]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:51:16.330008 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:51:16.334600 jq[1526]: true
Sep 9 23:51:16.330264 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:51:16.330489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:51:16.336222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:51:16.336403 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:51:16.338302 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:51:16.356776 update_engine[1523]: I20250909 23:51:16.356541 1523 main.cc:92] Flatcar Update Engine starting
Sep 9 23:51:16.369741 (ntainerd)[1537]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:51:16.383213 jq[1533]: true
Sep 9 23:51:16.384532 tar[1532]: linux-arm64/LICENSE
Sep 9 23:51:16.384799 tar[1532]: linux-arm64/helm
Sep 9 23:51:16.391926 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:51:16.400211 dbus-daemon[1502]: [system] SELinux support is enabled
Sep 9 23:51:16.400433 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:51:16.412637 update_engine[1523]: I20250909 23:51:16.409471 1523 update_check_scheduler.cc:74] Next update check in 4m49s
Sep 9 23:51:16.403606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:51:16.403632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:51:16.406367 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:51:16.406385 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:51:16.410540 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:51:16.412403 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:51:16.412965 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:51:16.413464 systemd-logind[1520]: New seat seat0.
Sep 9 23:51:16.417412 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:51:16.417412 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:51:16.417412 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:51:16.417364 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:51:16.425147 extend-filesystems[1505]: Resized filesystem in /dev/vda9
Sep 9 23:51:16.418556 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:51:16.419292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:51:16.452880 bash[1564]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:51:16.456507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:51:16.458147 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:51:16.470172 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:51:16.551406 containerd[1537]: time="2025-09-09T23:51:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:51:16.551923 containerd[1537]: time="2025-09-09T23:51:16.551891840Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 23:51:16.565403 containerd[1537]: time="2025-09-09T23:51:16.565302320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.76µs"
Sep 9 23:51:16.565403 containerd[1537]: time="2025-09-09T23:51:16.565340280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:51:16.565403 containerd[1537]: time="2025-09-09T23:51:16.565358200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:51:16.565755 containerd[1537]: time="2025-09-09T23:51:16.565730160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:51:16.565800 containerd[1537]: time="2025-09-09T23:51:16.565758440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:51:16.565800 containerd[1537]: time="2025-09-09T23:51:16.565784160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:51:16.565879 containerd[1537]: time="2025-09-09T23:51:16.565837480Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:51:16.565879 containerd[1537]: time="2025-09-09T23:51:16.565854680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566121 containerd[1537]: time="2025-09-09T23:51:16.566095280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566121 containerd[1537]: time="2025-09-09T23:51:16.566120400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566183 containerd[1537]: time="2025-09-09T23:51:16.566142320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566183 containerd[1537]: time="2025-09-09T23:51:16.566153120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566314 containerd[1537]: time="2025-09-09T23:51:16.566225920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566471 containerd[1537]: time="2025-09-09T23:51:16.566447080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566542 containerd[1537]: time="2025-09-09T23:51:16.566491400Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:51:16.566542 containerd[1537]: time="2025-09-09T23:51:16.566510960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:51:16.566600 containerd[1537]: time="2025-09-09T23:51:16.566544040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:51:16.566921 containerd[1537]: time="2025-09-09T23:51:16.566817000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:51:16.566921 containerd[1537]: time="2025-09-09T23:51:16.566883000Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:51:16.576587 containerd[1537]: time="2025-09-09T23:51:16.576538760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576612360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576627160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576638800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576650920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576663880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576675120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576686520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576702200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576712720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576722640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:51:16.576753 containerd[1537]: time="2025-09-09T23:51:16.576738240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:51:16.577083 containerd[1537]: time="2025-09-09T23:51:16.576886760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:51:16.577083 containerd[1537]: time="2025-09-09T23:51:16.576907160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:51:16.577083 containerd[1537]: time="2025-09-09T23:51:16.576922720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:51:16.577083 containerd[1537]: time="2025-09-09T23:51:16.576933920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:51:16.577083 containerd[1537]: time="2025-09-09T23:51:16.577024320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577201280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577233720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577269200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577301920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577318760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:51:16.577359 containerd[1537]: time="2025-09-09T23:51:16.577335240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:51:16.577981 containerd[1537]: time="2025-09-09T23:51:16.577949000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:51:16.577981 containerd[1537]: time="2025-09-09T23:51:16.577986360Z" level=info msg="Start snapshots syncer"
Sep 9 23:51:16.578069 containerd[1537]: time="2025-09-09T23:51:16.578017560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:51:16.579309 containerd[1537]: time="2025-09-09T23:51:16.579180520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:51:16.579583 containerd[1537]: time="2025-09-09T23:51:16.579559600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:51:16.579927 containerd[1537]: time="2025-09-09T23:51:16.579794320Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:51:16.580535 containerd[1537]: time="2025-09-09T23:51:16.580456280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:51:16.580628 containerd[1537]: time="2025-09-09T23:51:16.580611920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:51:16.580732 containerd[1537]: time="2025-09-09T23:51:16.580715920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:51:16.580789 containerd[1537]: time="2025-09-09T23:51:16.580776520Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:51:16.580845 containerd[1537]: time="2025-09-09T23:51:16.580831280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:51:16.580952 containerd[1537]: time="2025-09-09T23:51:16.580934800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:51:16.581027 containerd[1537]: time="2025-09-09T23:51:16.581011480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:51:16.581120 containerd[1537]: time="2025-09-09T23:51:16.581103600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581221840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581246560Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581318840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581337760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581347240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581358560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581366640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581385680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581398880Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581477960Z" level=info msg="runtime interface created"
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581483640Z" level=info msg="created NRI interface"
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581497040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581510320Z" level=info msg="Connect containerd service"
Sep 9 23:51:16.582311 containerd[1537]: time="2025-09-09T23:51:16.581541960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:51:16.582651 containerd[1537]: time="2025-09-09T23:51:16.582618800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:51:16.660217 containerd[1537]: time="2025-09-09T23:51:16.660136560Z" level=info msg="Start subscribing containerd event"
Sep 9 23:51:16.660338 containerd[1537]: time="2025-09-09T23:51:16.660231600Z" level=info msg="Start recovering state"
Sep 9 23:51:16.660338 containerd[1537]: time="2025-09-09T23:51:16.660321160Z" level=info msg="Start event monitor"
Sep 9 23:51:16.660338 containerd[1537]: time="2025-09-09T23:51:16.660334800Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:51:16.660419 containerd[1537]: time="2025-09-09T23:51:16.660343320Z" level=info msg="Start streaming server"
Sep 9 23:51:16.660419 containerd[1537]: time="2025-09-09T23:51:16.660352160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:51:16.660419 containerd[1537]: time="2025-09-09T23:51:16.660358880Z" level=info msg="runtime interface starting up..."
Sep 9 23:51:16.660419 containerd[1537]: time="2025-09-09T23:51:16.660364280Z" level=info msg="starting plugins..."
Sep 9 23:51:16.660419 containerd[1537]: time="2025-09-09T23:51:16.660378240Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:51:16.660642 containerd[1537]: time="2025-09-09T23:51:16.660189120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:51:16.660824 containerd[1537]: time="2025-09-09T23:51:16.660756120Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:51:16.660973 containerd[1537]: time="2025-09-09T23:51:16.660959720Z" level=info msg="containerd successfully booted in 0.109935s"
Sep 9 23:51:16.661185 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:51:16.697313 tar[1532]: linux-arm64/README.md
Sep 9 23:51:16.713889 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:51:16.907506 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:51:16.933529 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:51:16.936390 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:51:16.955074 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:51:16.955342 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:51:16.961209 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:51:16.988566 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:51:16.991146 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:51:16.993174 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:51:16.994298 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:51:17.509892 systemd-networkd[1436]: eth0: Gained IPv6LL
Sep 9 23:51:17.513467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:51:17.514912 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:51:17.517215 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:51:17.519900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:17.521940 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:51:17.561949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:51:17.573403 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:51:17.573681 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:51:17.576966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:51:18.252162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:18.253872 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:51:18.256551 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:51:18.259429 systemd[1]: Startup finished in 2.034s (kernel) + 5.778s (initrd) + 3.916s (userspace) = 11.729s.
Sep 9 23:51:18.683660 kubelet[1634]: E0909 23:51:18.683524 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:51:18.686152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:51:18.686398 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:51:18.686692 systemd[1]: kubelet.service: Consumed 759ms CPU time, 257.8M memory peak.
Sep 9 23:51:21.958204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:51:21.959707 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:49966.service - OpenSSH per-connection server daemon (10.0.0.1:49966).
Sep 9 23:51:22.040314 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 49966 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:22.042016 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:22.048684 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:51:22.050485 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:51:22.058207 systemd-logind[1520]: New session 1 of user core.
Sep 9 23:51:22.069873 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:51:22.073156 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:51:22.102944 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:51:22.105948 systemd-logind[1520]: New session c1 of user core.
Sep 9 23:51:22.242225 systemd[1652]: Queued start job for default target default.target.
Sep 9 23:51:22.254435 systemd[1652]: Created slice app.slice - User Application Slice.
Sep 9 23:51:22.254469 systemd[1652]: Reached target paths.target - Paths.
Sep 9 23:51:22.254510 systemd[1652]: Reached target timers.target - Timers.
Sep 9 23:51:22.255858 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:51:22.266089 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:51:22.266167 systemd[1652]: Reached target sockets.target - Sockets.
Sep 9 23:51:22.266208 systemd[1652]: Reached target basic.target - Basic System.
Sep 9 23:51:22.266236 systemd[1652]: Reached target default.target - Main User Target.
Sep 9 23:51:22.266262 systemd[1652]: Startup finished in 152ms.
Sep 9 23:51:22.266472 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:51:22.276680 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:51:22.344060 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:49970.service - OpenSSH per-connection server daemon (10.0.0.1:49970).
Sep 9 23:51:22.422764 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 49970 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:22.424330 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:22.429818 systemd-logind[1520]: New session 2 of user core.
Sep 9 23:51:22.441511 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:51:22.496900 sshd[1666]: Connection closed by 10.0.0.1 port 49970
Sep 9 23:51:22.497146 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Sep 9 23:51:22.511858 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:49970.service: Deactivated successfully.
Sep 9 23:51:22.514816 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:51:22.516994 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:51:22.517663 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978).
Sep 9 23:51:22.519145 systemd-logind[1520]: Removed session 2.
Sep 9 23:51:22.579431 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:22.581654 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:22.588610 systemd-logind[1520]: New session 3 of user core.
Sep 9 23:51:22.607530 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:51:22.661645 sshd[1675]: Connection closed by 10.0.0.1 port 49978
Sep 9 23:51:22.662018 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 9 23:51:22.675583 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:49978.service: Deactivated successfully.
Sep 9 23:51:22.677518 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 23:51:22.680383 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit.
Sep 9 23:51:22.682963 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:49992.service - OpenSSH per-connection server daemon (10.0.0.1:49992).
Sep 9 23:51:22.684038 systemd-logind[1520]: Removed session 3.
Sep 9 23:51:22.745299 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 49992 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:22.746966 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:22.752583 systemd-logind[1520]: New session 4 of user core.
Sep 9 23:51:22.759520 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:51:22.816095 sshd[1684]: Connection closed by 10.0.0.1 port 49992
Sep 9 23:51:22.817782 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 9 23:51:22.829766 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:49992.service: Deactivated successfully.
Sep 9 23:51:22.832410 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:51:22.835387 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:51:22.838977 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:49998.service - OpenSSH per-connection server daemon (10.0.0.1:49998).
Sep 9 23:51:22.841494 systemd-logind[1520]: Removed session 4.
Sep 9 23:51:22.897268 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 49998 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:22.898655 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:22.904401 systemd-logind[1520]: New session 5 of user core.
Sep 9 23:51:22.910522 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:51:22.973983 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:51:22.974595 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:51:22.990372 sudo[1694]: pam_unix(sudo:session): session closed for user root
Sep 9 23:51:22.994337 sshd[1693]: Connection closed by 10.0.0.1 port 49998
Sep 9 23:51:22.995090 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 9 23:51:23.008708 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:49998.service: Deactivated successfully.
Sep 9 23:51:23.011873 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:51:23.012688 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:51:23.015240 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:50008.service - OpenSSH per-connection server daemon (10.0.0.1:50008).
Sep 9 23:51:23.015860 systemd-logind[1520]: Removed session 5.
Sep 9 23:51:23.097795 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 50008 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:23.101771 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:23.105973 systemd-logind[1520]: New session 6 of user core.
Sep 9 23:51:23.120452 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:51:23.176223 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 23:51:23.176522 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:51:23.266790 sudo[1705]: pam_unix(sudo:session): session closed for user root
Sep 9 23:51:23.272683 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 23:51:23.272958 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:51:23.283245 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:51:23.339054 augenrules[1727]: No rules
Sep 9 23:51:23.340572 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:51:23.340778 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:51:23.341839 sudo[1704]: pam_unix(sudo:session): session closed for user root
Sep 9 23:51:23.343459 sshd[1703]: Connection closed by 10.0.0.1 port 50008
Sep 9 23:51:23.344237 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 9 23:51:23.354944 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:50008.service: Deactivated successfully.
Sep 9 23:51:23.356991 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:51:23.358266 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:51:23.361804 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:50016.service - OpenSSH per-connection server daemon (10.0.0.1:50016).
Sep 9 23:51:23.366402 systemd-logind[1520]: Removed session 6.
Sep 9 23:51:23.425444 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 50016 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:51:23.428800 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:51:23.435865 systemd-logind[1520]: New session 7 of user core.
Sep 9 23:51:23.448664 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:51:23.505629 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 23:51:23.506900 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:51:23.809071 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 23:51:23.826672 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 23:51:24.073019 dockerd[1761]: time="2025-09-09T23:51:24.071877168Z" level=info msg="Starting up"
Sep 9 23:51:24.073430 dockerd[1761]: time="2025-09-09T23:51:24.073384976Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 23:51:24.085224 dockerd[1761]: time="2025-09-09T23:51:24.085170846Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 23:51:24.123373 dockerd[1761]: time="2025-09-09T23:51:24.123322894Z" level=info msg="Loading containers: start."
Sep 9 23:51:24.134320 kernel: Initializing XFRM netlink socket
Sep 9 23:51:24.346643 systemd-networkd[1436]: docker0: Link UP
Sep 9 23:51:24.350888 dockerd[1761]: time="2025-09-09T23:51:24.350827520Z" level=info msg="Loading containers: done."
Sep 9 23:51:24.363910 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3839172876-merged.mount: Deactivated successfully.
Sep 9 23:51:24.370516 dockerd[1761]: time="2025-09-09T23:51:24.370466668Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 23:51:24.370643 dockerd[1761]: time="2025-09-09T23:51:24.370570530Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 23:51:24.370689 dockerd[1761]: time="2025-09-09T23:51:24.370670395Z" level=info msg="Initializing buildkit"
Sep 9 23:51:24.398750 dockerd[1761]: time="2025-09-09T23:51:24.398712713Z" level=info msg="Completed buildkit initialization"
Sep 9 23:51:24.403754 dockerd[1761]: time="2025-09-09T23:51:24.403696033Z" level=info msg="Daemon has completed initialization"
Sep 9 23:51:24.403963 dockerd[1761]: time="2025-09-09T23:51:24.403777589Z" level=info msg="API listen on /run/docker.sock"
Sep 9 23:51:24.403958 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 23:51:25.039227 containerd[1537]: time="2025-09-09T23:51:25.039178987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 23:51:25.788789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769858678.mount: Deactivated successfully.
Sep 9 23:51:26.891101 containerd[1537]: time="2025-09-09T23:51:26.890978759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:26.892441 containerd[1537]: time="2025-09-09T23:51:26.892254276Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 9 23:51:26.893539 containerd[1537]: time="2025-09-09T23:51:26.893496747Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:26.899190 containerd[1537]: time="2025-09-09T23:51:26.898683089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:26.899906 containerd[1537]: time="2025-09-09T23:51:26.899870837Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.860638295s"
Sep 9 23:51:26.899978 containerd[1537]: time="2025-09-09T23:51:26.899912488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 23:51:26.900705 containerd[1537]: time="2025-09-09T23:51:26.900635213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 23:51:28.153373 containerd[1537]: time="2025-09-09T23:51:28.153319646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:28.155279 containerd[1537]: time="2025-09-09T23:51:28.155066498Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 9 23:51:28.157908 containerd[1537]: time="2025-09-09T23:51:28.157872046Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:28.161883 containerd[1537]: time="2025-09-09T23:51:28.161832857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:28.162922 containerd[1537]: time="2025-09-09T23:51:28.162885281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.262216771s"
Sep 9 23:51:28.163115 containerd[1537]: time="2025-09-09T23:51:28.163014335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 23:51:28.163473 containerd[1537]: time="2025-09-09T23:51:28.163453001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 23:51:28.936720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:51:28.939581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:29.085688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:29.104672 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:51:29.158332 kubelet[2051]: E0909 23:51:29.157157 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:51:29.161265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:51:29.161421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:51:29.162023 systemd[1]: kubelet.service: Consumed 155ms CPU time, 105.8M memory peak.
Sep 9 23:51:29.433094 containerd[1537]: time="2025-09-09T23:51:29.432482987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:29.435194 containerd[1537]: time="2025-09-09T23:51:29.435155304Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 9 23:51:29.436287 containerd[1537]: time="2025-09-09T23:51:29.436247735Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:29.441323 containerd[1537]: time="2025-09-09T23:51:29.440444025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:29.444079 containerd[1537]: time="2025-09-09T23:51:29.443569458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.280084998s"
Sep 9 23:51:29.444079 containerd[1537]: time="2025-09-09T23:51:29.443614220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 23:51:29.444300 containerd[1537]: time="2025-09-09T23:51:29.444265211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 23:51:30.509394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527670660.mount: Deactivated successfully.
Sep 9 23:51:31.081899 containerd[1537]: time="2025-09-09T23:51:31.081843222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:31.082517 containerd[1537]: time="2025-09-09T23:51:31.082479163Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 9 23:51:31.087967 containerd[1537]: time="2025-09-09T23:51:31.087888326Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:31.091124 containerd[1537]: time="2025-09-09T23:51:31.090368406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:31.091489 containerd[1537]: time="2025-09-09T23:51:31.091465442Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.647164779s"
Sep 9 23:51:31.091547 containerd[1537]: time="2025-09-09T23:51:31.091495147Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 23:51:31.092181 containerd[1537]: time="2025-09-09T23:51:31.092108637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:51:31.646353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645813255.mount: Deactivated successfully.
Sep 9 23:51:32.580223 containerd[1537]: time="2025-09-09T23:51:32.579743151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:32.581494 containerd[1537]: time="2025-09-09T23:51:32.581457171Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 23:51:32.583895 containerd[1537]: time="2025-09-09T23:51:32.583834031Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:32.587338 containerd[1537]: time="2025-09-09T23:51:32.587272378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:32.589740 containerd[1537]: time="2025-09-09T23:51:32.589427084Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.497283915s"
Sep 9 23:51:32.589740 containerd[1537]: time="2025-09-09T23:51:32.589479914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:51:32.590564 containerd[1537]: time="2025-09-09T23:51:32.590481202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:51:33.065556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534625586.mount: Deactivated successfully.
Sep 9 23:51:33.077740 containerd[1537]: time="2025-09-09T23:51:33.077654330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:51:33.078627 containerd[1537]: time="2025-09-09T23:51:33.078575808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 23:51:33.080311 containerd[1537]: time="2025-09-09T23:51:33.080236584Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:51:33.083600 containerd[1537]: time="2025-09-09T23:51:33.083271227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:51:33.084206 containerd[1537]: time="2025-09-09T23:51:33.084178462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.65853ms"
Sep 9 23:51:33.084271 containerd[1537]: time="2025-09-09T23:51:33.084209764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 23:51:33.084895 containerd[1537]: time="2025-09-09T23:51:33.084860306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 23:51:33.620429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708045631.mount: Deactivated successfully.
Sep 9 23:51:35.218322 containerd[1537]: time="2025-09-09T23:51:35.218251780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:35.219309 containerd[1537]: time="2025-09-09T23:51:35.219264494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 9 23:51:35.220372 containerd[1537]: time="2025-09-09T23:51:35.220342419Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:35.224898 containerd[1537]: time="2025-09-09T23:51:35.224840187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:51:35.225956 containerd[1537]: time="2025-09-09T23:51:35.225912246Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.14101546s"
Sep 9 23:51:35.226088 containerd[1537]: time="2025-09-09T23:51:35.226037243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 23:51:39.185014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 23:51:39.189064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:39.433088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:39.437063 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:51:39.479219 kubelet[2209]: E0909 23:51:39.476971 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:51:39.481072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:51:39.481593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:51:39.483381 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.4M memory peak.
Sep 9 23:51:40.835425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:40.835973 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.4M memory peak.
Sep 9 23:51:40.839103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:40.865211 systemd[1]: Reload requested from client PID 2223 ('systemctl') (unit session-7.scope)...
Sep 9 23:51:40.865230 systemd[1]: Reloading...
Sep 9 23:51:40.945409 zram_generator::config[2267]: No configuration found.
Sep 9 23:51:41.323856 systemd[1]: Reloading finished in 458 ms.
Sep 9 23:51:41.393898 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 23:51:41.393987 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 23:51:41.394250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:41.394316 systemd[1]: kubelet.service: Consumed 100ms CPU time, 95M memory peak.
Sep 9 23:51:41.395907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:41.549292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:51:41.554019 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:51:41.597495 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:51:41.597495 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 23:51:41.597495 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:51:41.597832 kubelet[2312]: I0909 23:51:41.597526 2312 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:51:42.121156 kubelet[2312]: I0909 23:51:42.121027 2312 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 23:51:42.121338 kubelet[2312]: I0909 23:51:42.121149 2312 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:51:42.121831 kubelet[2312]: I0909 23:51:42.121804 2312 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 23:51:42.153339 kubelet[2312]: E0909 23:51:42.153247 2312 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:51:42.154160 kubelet[2312]: I0909 23:51:42.153995 2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:51:42.162002 kubelet[2312]: I0909 23:51:42.161977 2312 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:51:42.166535 kubelet[2312]: I0909 23:51:42.166479 2312 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 23:51:42.167828 kubelet[2312]: I0909 23:51:42.167727 2312 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:51:42.167987 kubelet[2312]: I0909 23:51:42.167792 2312 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:51:42.168122 kubelet[2312]: I0909 23:51:42.168040 2312 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:51:42.168122 kubelet[2312]: I0909 23:51:42.168052 2312 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 23:51:42.168316 kubelet[2312]: I0909 23:51:42.168259 2312 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:51:42.170716 kubelet[2312]: I0909 23:51:42.170680 2312 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 23:51:42.170716 kubelet[2312]: I0909 23:51:42.170709 2312 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:51:42.171408 kubelet[2312]: I0909 23:51:42.170734 2312 kubelet.go:352] "Adding apiserver pod source"
Sep 9 23:51:42.171408 kubelet[2312]: I0909 23:51:42.170756 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:51:42.174307 kubelet[2312]: W0909 23:51:42.174201 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Sep 9 23:51:42.174307 kubelet[2312]: W0909 23:51:42.174244 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Sep 9 23:51:42.174434 kubelet[2312]: E0909 23:51:42.174275 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:51:42.174489 kubelet[2312]: E0909 23:51:42.174413 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:51:42.175870 kubelet[2312]: I0909 23:51:42.175080 2312 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:51:42.175870 kubelet[2312]: I0909 23:51:42.175790 2312 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:51:42.176473 kubelet[2312]: W0909 23:51:42.176454 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:51:42.177804 kubelet[2312]: I0909 23:51:42.177777 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 23:51:42.177893 kubelet[2312]: I0909 23:51:42.177819 2312 server.go:1287] "Started kubelet"
Sep 9 23:51:42.179491 kubelet[2312]: I0909 23:51:42.179452 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:51:42.179997 kubelet[2312]: I0909 23:51:42.179961 2312 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:51:42.181302 kubelet[2312]: E0909 23:51:42.180969 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c249e1970516 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:51:42.177797398 +0000 UTC m=+0.620184149,LastTimestamp:2025-09-09 23:51:42.177797398 +0000 UTC m=+0.620184149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:51:42.181910 kubelet[2312]: I0909 23:51:42.181876 2312 server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:51:42.183019 kubelet[2312]: I0909 23:51:42.182934 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:51:42.183550 kubelet[2312]: E0909 23:51:42.183524 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:42.183614 kubelet[2312]: I0909 23:51:42.183563 2312 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:51:42.183725 kubelet[2312]: I0909 23:51:42.183708 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:51:42.183797 kubelet[2312]: I0909 23:51:42.183774 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:51:42.184266 kubelet[2312]: W0909 23:51:42.184113 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Sep 9 23:51:42.184266 kubelet[2312]: E0909 23:51:42.184161 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:51:42.184655 kubelet[2312]: I0909 23:51:42.184580 2312 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:51:42.184832 kubelet[2312]: I0909 23:51:42.184664 2312 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:51:42.185149 kubelet[2312]: E0909 23:51:42.185124 2312 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:51:42.185491 kubelet[2312]: E0909 23:51:42.185448 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" Sep 9 23:51:42.185491 kubelet[2312]: I0909 23:51:42.183783 2312 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:51:42.185636 kubelet[2312]: I0909 23:51:42.185618 2312 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:51:42.185805 kubelet[2312]: I0909 23:51:42.185752 2312 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:51:42.198064 kubelet[2312]: I0909 23:51:42.197993 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:51:42.198330 kubelet[2312]: I0909 23:51:42.198307 2312 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:51:42.198330 kubelet[2312]: I0909 23:51:42.198324 2312 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:51:42.198414 kubelet[2312]: I0909 23:51:42.198343 2312 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:51:42.200199 kubelet[2312]: I0909 23:51:42.200172 2312 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:51:42.200426 kubelet[2312]: I0909 23:51:42.200411 2312 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:51:42.200520 kubelet[2312]: I0909 23:51:42.200508 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:51:42.200566 kubelet[2312]: I0909 23:51:42.200558 2312 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:51:42.200662 kubelet[2312]: E0909 23:51:42.200644 2312 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:51:42.283771 kubelet[2312]: E0909 23:51:42.283722 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:42.301197 kubelet[2312]: E0909 23:51:42.301156 2312 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:51:42.384551 kubelet[2312]: E0909 23:51:42.384435 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:42.385993 kubelet[2312]: E0909 23:51:42.385959 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms" Sep 9 23:51:42.456143 kubelet[2312]: W0909 23:51:42.456046 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Sep 9 23:51:42.456143 kubelet[2312]: E0909 23:51:42.456116 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:51:42.484575 kubelet[2312]: E0909 23:51:42.484535 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:42.495290 kubelet[2312]: I0909 23:51:42.495240 2312 policy_none.go:49] "None policy: Start" Sep 9 23:51:42.495378 kubelet[2312]: I0909 23:51:42.495321 2312 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:51:42.495378 kubelet[2312]: I0909 23:51:42.495337 2312 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:51:42.500819 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:51:42.502267 kubelet[2312]: E0909 23:51:42.502219 2312 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:51:42.522129 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:51:42.526356 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 23:51:42.537587 kubelet[2312]: I0909 23:51:42.537405 2312 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:51:42.537715 kubelet[2312]: I0909 23:51:42.537611 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:51:42.537715 kubelet[2312]: I0909 23:51:42.537623 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:51:42.537973 kubelet[2312]: I0909 23:51:42.537856 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:51:42.539700 kubelet[2312]: E0909 23:51:42.539654 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:51:42.539700 kubelet[2312]: E0909 23:51:42.539698 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:51:42.639584 kubelet[2312]: I0909 23:51:42.639068 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:51:42.639904 kubelet[2312]: E0909 23:51:42.639676 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Sep 9 23:51:42.786633 kubelet[2312]: E0909 23:51:42.786588 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms" Sep 9 23:51:42.841438 kubelet[2312]: I0909 23:51:42.841411 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:51:42.841839 kubelet[2312]: E0909 23:51:42.841810 2312 kubelet_node_status.go:107] "Unable to register node 
with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Sep 9 23:51:42.911484 systemd[1]: Created slice kubepods-burstable-pod43c1e9c6c38ccb4945b5cbcab2d452e1.slice - libcontainer container kubepods-burstable-pod43c1e9c6c38ccb4945b5cbcab2d452e1.slice. Sep 9 23:51:42.924338 kubelet[2312]: E0909 23:51:42.924300 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:42.927227 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 23:51:42.929288 kubelet[2312]: E0909 23:51:42.929250 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:42.931752 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 9 23:51:42.933446 kubelet[2312]: E0909 23:51:42.933412 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:42.990089 kubelet[2312]: I0909 23:51:42.990014 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:42.990089 kubelet[2312]: I0909 23:51:42.990061 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:42.990089 kubelet[2312]: I0909 23:51:42.990085 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:42.990266 kubelet[2312]: I0909 23:51:42.990102 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:42.990266 kubelet[2312]: I0909 23:51:42.990122 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:42.990266 kubelet[2312]: I0909 23:51:42.990154 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:42.990266 kubelet[2312]: I0909 23:51:42.990171 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:42.990266 kubelet[2312]: I0909 23:51:42.990213 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:42.990403 kubelet[2312]: I0909 23:51:42.990229 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:43.126294 kubelet[2312]: W0909 23:51:43.126210 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Sep 9 23:51:43.126294 kubelet[2312]: E0909 23:51:43.126291 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:51:43.227996 containerd[1537]: time="2025-09-09T23:51:43.227908731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43c1e9c6c38ccb4945b5cbcab2d452e1,Namespace:kube-system,Attempt:0,}" Sep 9 23:51:43.230194 containerd[1537]: time="2025-09-09T23:51:43.230121009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 23:51:43.235028 containerd[1537]: time="2025-09-09T23:51:43.234944899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 23:51:43.244020 kubelet[2312]: I0909 23:51:43.243977 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:51:43.244448 kubelet[2312]: E0909 23:51:43.244412 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Sep 9 23:51:43.283960 containerd[1537]: time="2025-09-09T23:51:43.283744100Z" level=info msg="connecting to shim 72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27" address="unix:///run/containerd/s/cee34ad736691fc03af02c27721b7bba0363b57a14039dff87d7f99b2f09e31a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:51:43.304092 
containerd[1537]: time="2025-09-09T23:51:43.304029701Z" level=info msg="connecting to shim 20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897" address="unix:///run/containerd/s/f3c60c34d15527c7200c5b5b2be3e4c64bdfb07db137315a548955b6aa47387b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:51:43.306903 containerd[1537]: time="2025-09-09T23:51:43.306830043Z" level=info msg="connecting to shim cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19" address="unix:///run/containerd/s/e24687cdfd41afc7620207d678d36d245c4d3f2602cb56d206a2f9843e72f3d9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:51:43.332503 systemd[1]: Started cri-containerd-72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27.scope - libcontainer container 72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27. Sep 9 23:51:43.338032 systemd[1]: Started cri-containerd-20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897.scope - libcontainer container 20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897. Sep 9 23:51:43.339931 systemd[1]: Started cri-containerd-cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19.scope - libcontainer container cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19. 
Sep 9 23:51:43.382706 kubelet[2312]: W0909 23:51:43.381540 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Sep 9 23:51:43.382706 kubelet[2312]: E0909 23:51:43.381635 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:51:43.385361 containerd[1537]: time="2025-09-09T23:51:43.385251667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897\"" Sep 9 23:51:43.390259 containerd[1537]: time="2025-09-09T23:51:43.390092944Z" level=info msg="CreateContainer within sandbox \"20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:51:43.395119 containerd[1537]: time="2025-09-09T23:51:43.394925246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43c1e9c6c38ccb4945b5cbcab2d452e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27\"" Sep 9 23:51:43.399721 containerd[1537]: time="2025-09-09T23:51:43.399673781Z" level=info msg="CreateContainer within sandbox \"72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:51:43.400300 containerd[1537]: time="2025-09-09T23:51:43.400251027Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19\"" Sep 9 23:51:43.405016 containerd[1537]: time="2025-09-09T23:51:43.404975445Z" level=info msg="CreateContainer within sandbox \"cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:51:43.411209 containerd[1537]: time="2025-09-09T23:51:43.411156580Z" level=info msg="Container 499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:51:43.421573 containerd[1537]: time="2025-09-09T23:51:43.421513890Z" level=info msg="Container 356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:51:43.428723 containerd[1537]: time="2025-09-09T23:51:43.428671204Z" level=info msg="CreateContainer within sandbox \"20db6ccc77428fb4e3c269753ee4843cf85d39b29809a1545e70cf6fecdd8897\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254\"" Sep 9 23:51:43.429600 containerd[1537]: time="2025-09-09T23:51:43.429564256Z" level=info msg="StartContainer for \"499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254\"" Sep 9 23:51:43.429769 containerd[1537]: time="2025-09-09T23:51:43.429736120Z" level=info msg="Container ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:51:43.431041 containerd[1537]: time="2025-09-09T23:51:43.431002265Z" level=info msg="connecting to shim 499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254" address="unix:///run/containerd/s/f3c60c34d15527c7200c5b5b2be3e4c64bdfb07db137315a548955b6aa47387b" protocol=ttrpc version=3 Sep 9 23:51:43.437437 containerd[1537]: 
time="2025-09-09T23:51:43.437371729Z" level=info msg="CreateContainer within sandbox \"72aa3555733e21c95f85824618866b218ec43b104a353502b7ea32bf728ffb27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f\"" Sep 9 23:51:43.438211 containerd[1537]: time="2025-09-09T23:51:43.438150565Z" level=info msg="StartContainer for \"356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f\"" Sep 9 23:51:43.439473 containerd[1537]: time="2025-09-09T23:51:43.439423080Z" level=info msg="connecting to shim 356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f" address="unix:///run/containerd/s/cee34ad736691fc03af02c27721b7bba0363b57a14039dff87d7f99b2f09e31a" protocol=ttrpc version=3 Sep 9 23:51:43.449473 containerd[1537]: time="2025-09-09T23:51:43.449353574Z" level=info msg="CreateContainer within sandbox \"cf7b0037e7191e100ba143bd5723811a290470eaa7e012733b94c5262ce48c19\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6\"" Sep 9 23:51:43.450491 systemd[1]: Started cri-containerd-499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254.scope - libcontainer container 499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254. 
Sep 9 23:51:43.451150 containerd[1537]: time="2025-09-09T23:51:43.450481507Z" level=info msg="StartContainer for \"ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6\"" Sep 9 23:51:43.455769 containerd[1537]: time="2025-09-09T23:51:43.455723359Z" level=info msg="connecting to shim ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6" address="unix:///run/containerd/s/e24687cdfd41afc7620207d678d36d245c4d3f2602cb56d206a2f9843e72f3d9" protocol=ttrpc version=3 Sep 9 23:51:43.475471 systemd[1]: Started cri-containerd-356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f.scope - libcontainer container 356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f. Sep 9 23:51:43.479076 systemd[1]: Started cri-containerd-ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6.scope - libcontainer container ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6. Sep 9 23:51:43.496085 kubelet[2312]: W0909 23:51:43.496027 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Sep 9 23:51:43.496340 kubelet[2312]: E0909 23:51:43.496308 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:51:43.520898 containerd[1537]: time="2025-09-09T23:51:43.520825964Z" level=info msg="StartContainer for \"499776afe2a0028d9c8e464ee09f54d2f51c60b01e281106f0e0d2b1fea90254\" returns successfully" Sep 9 23:51:43.534670 containerd[1537]: time="2025-09-09T23:51:43.534629648Z" level=info msg="StartContainer for \"356215af13689d688a21a0ac46423f790b3767f67cda9189a063132a3c7ccf5f\" returns 
successfully" Sep 9 23:51:43.537878 containerd[1537]: time="2025-09-09T23:51:43.537849714Z" level=info msg="StartContainer for \"ef572005a5a1426ef6bb5b8cd766e1650d8b0c126eafeee9e9055b4d3c21f8f6\" returns successfully" Sep 9 23:51:43.588380 kubelet[2312]: E0909 23:51:43.588208 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s" Sep 9 23:51:44.046292 kubelet[2312]: I0909 23:51:44.046250 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:51:44.211877 kubelet[2312]: E0909 23:51:44.211839 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:44.214121 kubelet[2312]: E0909 23:51:44.214083 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:44.218534 kubelet[2312]: E0909 23:51:44.218505 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:44.815839 kubelet[2312]: I0909 23:51:44.815794 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:51:44.815839 kubelet[2312]: E0909 23:51:44.815843 2312 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 23:51:44.832755 kubelet[2312]: E0909 23:51:44.832721 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:44.932894 kubelet[2312]: E0909 23:51:44.932858 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Sep 9 23:51:45.033553 kubelet[2312]: E0909 23:51:45.033510 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:45.134201 kubelet[2312]: E0909 23:51:45.134084 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:45.220293 kubelet[2312]: E0909 23:51:45.220188 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:45.221159 kubelet[2312]: E0909 23:51:45.221133 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:45.222031 kubelet[2312]: E0909 23:51:45.222010 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 23:51:45.234218 kubelet[2312]: E0909 23:51:45.234181 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:45.334846 kubelet[2312]: E0909 23:51:45.334800 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:45.386063 kubelet[2312]: I0909 23:51:45.385934 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:45.393611 kubelet[2312]: E0909 23:51:45.393337 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:45.393611 kubelet[2312]: I0909 23:51:45.393374 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:45.395482 kubelet[2312]: E0909 23:51:45.395440 
2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:45.395482 kubelet[2312]: I0909 23:51:45.395475 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:45.397428 kubelet[2312]: E0909 23:51:45.397377 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:46.174506 kubelet[2312]: I0909 23:51:46.174471 2312 apiserver.go:52] "Watching apiserver" Sep 9 23:51:46.184030 kubelet[2312]: I0909 23:51:46.183972 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:51:46.220506 kubelet[2312]: I0909 23:51:46.220454 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:46.598331 kubelet[2312]: I0909 23:51:46.598272 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:46.994089 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)... Sep 9 23:51:46.994107 systemd[1]: Reloading... Sep 9 23:51:47.057315 zram_generator::config[2629]: No configuration found. Sep 9 23:51:47.220399 systemd[1]: Reloading finished in 225 ms. Sep 9 23:51:47.237152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:51:47.249362 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:51:47.249607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:51:47.249670 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 128.1M memory peak. 
Sep 9 23:51:47.251978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:51:47.411356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:51:47.416017 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:51:47.454033 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:51:47.454033 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:51:47.454033 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:51:47.454392 kubelet[2670]: I0909 23:51:47.454076 2670 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:51:47.464624 kubelet[2670]: I0909 23:51:47.464583 2670 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 23:51:47.465316 kubelet[2670]: I0909 23:51:47.464772 2670 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:51:47.465316 kubelet[2670]: I0909 23:51:47.465065 2670 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 23:51:47.466601 kubelet[2670]: I0909 23:51:47.466572 2670 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 23:51:47.469169 kubelet[2670]: I0909 23:51:47.469138 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:51:47.472730 kubelet[2670]: I0909 23:51:47.472711 2670 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:51:47.477003 kubelet[2670]: I0909 23:51:47.476973 2670 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:51:47.477190 kubelet[2670]: I0909 23:51:47.477163 2670 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:51:47.477369 kubelet[2670]: I0909 23:51:47.477192 2670 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:51:47.477466 kubelet[2670]: I0909 23:51:47.477382 2670 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:51:47.477466 kubelet[2670]: I0909 23:51:47.477392 2670 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 23:51:47.477466 kubelet[2670]: I0909 23:51:47.477433 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:51:47.477574 kubelet[2670]: I0909 23:51:47.477563 2670 kubelet.go:446] "Attempting to sync node with API server" Sep 9 23:51:47.477605 kubelet[2670]: I0909 23:51:47.477576 2670 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:51:47.477605 kubelet[2670]: I0909 23:51:47.477599 2670 kubelet.go:352] "Adding apiserver pod source" Sep 9 23:51:47.477649 kubelet[2670]: I0909 23:51:47.477608 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:51:47.478861 kubelet[2670]: I0909 23:51:47.478840 2670 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:51:47.479337 kubelet[2670]: I0909 23:51:47.479321 2670 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:51:47.479757 kubelet[2670]: I0909 23:51:47.479741 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:51:47.479791 kubelet[2670]: I0909 23:51:47.479779 2670 server.go:1287] "Started kubelet" Sep 9 23:51:47.481135 kubelet[2670]: I0909 23:51:47.481036 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:51:47.484310 kubelet[2670]: 
I0909 23:51:47.482388 2670 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:51:47.484310 kubelet[2670]: I0909 23:51:47.482465 2670 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:51:47.484310 kubelet[2670]: I0909 23:51:47.482615 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:51:47.484310 kubelet[2670]: I0909 23:51:47.483467 2670 server.go:479] "Adding debug handlers to kubelet server" Sep 9 23:51:47.484761 kubelet[2670]: I0909 23:51:47.484736 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:51:47.487761 kubelet[2670]: I0909 23:51:47.487703 2670 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:51:47.488262 kubelet[2670]: E0909 23:51:47.488235 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:51:47.488924 kubelet[2670]: I0909 23:51:47.488897 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:51:47.489158 kubelet[2670]: I0909 23:51:47.489023 2670 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:51:47.489915 kubelet[2670]: I0909 23:51:47.489884 2670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:51:47.491361 kubelet[2670]: I0909 23:51:47.491335 2670 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:51:47.491361 kubelet[2670]: I0909 23:51:47.491359 2670 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:51:47.504198 kubelet[2670]: I0909 23:51:47.504086 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 23:51:47.508206 kubelet[2670]: I0909 23:51:47.508177 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:51:47.508370 kubelet[2670]: I0909 23:51:47.508359 2670 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 23:51:47.508441 kubelet[2670]: I0909 23:51:47.508430 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 23:51:47.508490 kubelet[2670]: I0909 23:51:47.508482 2670 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 23:51:47.508589 kubelet[2670]: E0909 23:51:47.508571 2670 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:51:47.535520 kubelet[2670]: I0909 23:51:47.535490 2670 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:51:47.535520 kubelet[2670]: I0909 23:51:47.535510 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:51:47.535520 kubelet[2670]: I0909 23:51:47.535531 2670 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:51:47.535698 kubelet[2670]: I0909 23:51:47.535688 2670 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:51:47.535719 kubelet[2670]: I0909 23:51:47.535699 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:51:47.535719 kubelet[2670]: I0909 23:51:47.535718 2670 policy_none.go:49] "None policy: Start" Sep 9 23:51:47.535783 kubelet[2670]: I0909 23:51:47.535727 2670 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:51:47.535783 kubelet[2670]: I0909 23:51:47.535735 2670 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:51:47.535860 kubelet[2670]: I0909 23:51:47.535841 2670 state_mem.go:75] "Updated machine memory state" Sep 9 23:51:47.539506 kubelet[2670]: I0909 23:51:47.539427 2670 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:51:47.539622 kubelet[2670]: I0909 23:51:47.539599 2670 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:51:47.539647 kubelet[2670]: I0909 23:51:47.539620 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:51:47.539967 kubelet[2670]: I0909 23:51:47.539890 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:51:47.541274 kubelet[2670]: E0909 23:51:47.541246 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:51:47.610196 kubelet[2670]: I0909 23:51:47.610158 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:47.610196 kubelet[2670]: I0909 23:51:47.610322 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:47.610196 kubelet[2670]: I0909 23:51:47.610199 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:47.617394 kubelet[2670]: E0909 23:51:47.617338 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:47.617526 kubelet[2670]: E0909 23:51:47.617429 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:47.641721 kubelet[2670]: I0909 23:51:47.641693 2670 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 23:51:47.650569 kubelet[2670]: I0909 23:51:47.650536 2670 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 23:51:47.651148 kubelet[2670]: I0909 23:51:47.651130 2670 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 23:51:47.689793 kubelet[2670]: I0909 23:51:47.689654 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:47.689793 kubelet[2670]: I0909 23:51:47.689697 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:47.689793 kubelet[2670]: I0909 23:51:47.689717 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:47.689793 kubelet[2670]: I0909 23:51:47.689732 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:51:47.689793 kubelet[2670]: I0909 23:51:47.689747 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " 
pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:47.690056 kubelet[2670]: I0909 23:51:47.689771 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:47.690056 kubelet[2670]: I0909 23:51:47.689796 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:47.690056 kubelet[2670]: I0909 23:51:47.689836 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43c1e9c6c38ccb4945b5cbcab2d452e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43c1e9c6c38ccb4945b5cbcab2d452e1\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:47.690056 kubelet[2670]: I0909 23:51:47.689882 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:51:48.478630 kubelet[2670]: I0909 23:51:48.478572 2670 apiserver.go:52] "Watching apiserver" Sep 9 23:51:48.489662 kubelet[2670]: I0909 23:51:48.489606 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:51:48.523540 kubelet[2670]: I0909 23:51:48.523349 2670 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.523028192 podStartE2EDuration="2.523028192s" podCreationTimestamp="2025-09-09 23:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:51:48.5220276 +0000 UTC m=+1.102780641" watchObservedRunningTime="2025-09-09 23:51:48.523028192 +0000 UTC m=+1.103781233" Sep 9 23:51:48.524296 kubelet[2670]: I0909 23:51:48.524003 2670 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:48.531109 kubelet[2670]: E0909 23:51:48.531078 2670 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:51:48.539310 kubelet[2670]: I0909 23:51:48.539244 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.539226282 podStartE2EDuration="2.539226282s" podCreationTimestamp="2025-09-09 23:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:51:48.53152502 +0000 UTC m=+1.112278101" watchObservedRunningTime="2025-09-09 23:51:48.539226282 +0000 UTC m=+1.119979323" Sep 9 23:51:48.549727 kubelet[2670]: I0909 23:51:48.549599 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.549582725 podStartE2EDuration="1.549582725s" podCreationTimestamp="2025-09-09 23:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:51:48.539844485 +0000 UTC m=+1.120597526" watchObservedRunningTime="2025-09-09 23:51:48.549582725 +0000 UTC m=+1.130335766" Sep 9 
23:51:52.996815 kubelet[2670]: I0909 23:51:52.996756 2670 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:51:52.998100 containerd[1537]: time="2025-09-09T23:51:52.998049065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:51:52.998505 kubelet[2670]: I0909 23:51:52.998253 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:51:53.797816 systemd[1]: Created slice kubepods-besteffort-pod47e07ba7_0ef0_48c9_b6bc_1b0c06307132.slice - libcontainer container kubepods-besteffort-pod47e07ba7_0ef0_48c9_b6bc_1b0c06307132.slice. Sep 9 23:51:53.823471 kubelet[2670]: I0909 23:51:53.823430 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47e07ba7-0ef0-48c9-b6bc-1b0c06307132-lib-modules\") pod \"kube-proxy-xlcr4\" (UID: \"47e07ba7-0ef0-48c9-b6bc-1b0c06307132\") " pod="kube-system/kube-proxy-xlcr4" Sep 9 23:51:53.823471 kubelet[2670]: I0909 23:51:53.823473 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwblp\" (UniqueName: \"kubernetes.io/projected/47e07ba7-0ef0-48c9-b6bc-1b0c06307132-kube-api-access-xwblp\") pod \"kube-proxy-xlcr4\" (UID: \"47e07ba7-0ef0-48c9-b6bc-1b0c06307132\") " pod="kube-system/kube-proxy-xlcr4" Sep 9 23:51:53.823625 kubelet[2670]: I0909 23:51:53.823495 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47e07ba7-0ef0-48c9-b6bc-1b0c06307132-xtables-lock\") pod \"kube-proxy-xlcr4\" (UID: \"47e07ba7-0ef0-48c9-b6bc-1b0c06307132\") " pod="kube-system/kube-proxy-xlcr4" Sep 9 23:51:53.823625 kubelet[2670]: I0909 23:51:53.823514 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47e07ba7-0ef0-48c9-b6bc-1b0c06307132-kube-proxy\") pod \"kube-proxy-xlcr4\" (UID: \"47e07ba7-0ef0-48c9-b6bc-1b0c06307132\") " pod="kube-system/kube-proxy-xlcr4" Sep 9 23:51:54.059106 systemd[1]: Created slice kubepods-besteffort-poda0675214_f7cc_4833_962b_56da128f2310.slice - libcontainer container kubepods-besteffort-poda0675214_f7cc_4833_962b_56da128f2310.slice. Sep 9 23:51:54.110008 containerd[1537]: time="2025-09-09T23:51:54.109967971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlcr4,Uid:47e07ba7-0ef0-48c9-b6bc-1b0c06307132,Namespace:kube-system,Attempt:0,}" Sep 9 23:51:54.125382 kubelet[2670]: I0909 23:51:54.125319 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0675214-f7cc-4833-962b-56da128f2310-var-lib-calico\") pod \"tigera-operator-755d956888-9bbxh\" (UID: \"a0675214-f7cc-4833-962b-56da128f2310\") " pod="tigera-operator/tigera-operator-755d956888-9bbxh" Sep 9 23:51:54.125382 kubelet[2670]: I0909 23:51:54.125389 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gtc\" (UniqueName: \"kubernetes.io/projected/a0675214-f7cc-4833-962b-56da128f2310-kube-api-access-v5gtc\") pod \"tigera-operator-755d956888-9bbxh\" (UID: \"a0675214-f7cc-4833-962b-56da128f2310\") " pod="tigera-operator/tigera-operator-755d956888-9bbxh" Sep 9 23:51:54.128161 containerd[1537]: time="2025-09-09T23:51:54.128117430Z" level=info msg="connecting to shim 4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89" address="unix:///run/containerd/s/b549968c3e7eaaebb53a4169f0ee48771c58c45fa3f86a9a2fe64dab44115035" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:51:54.157517 systemd[1]: Started cri-containerd-4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89.scope - libcontainer container 
4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89. Sep 9 23:51:54.184125 containerd[1537]: time="2025-09-09T23:51:54.184085082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xlcr4,Uid:47e07ba7-0ef0-48c9-b6bc-1b0c06307132,Namespace:kube-system,Attempt:0,} returns sandbox id \"4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89\"" Sep 9 23:51:54.187524 containerd[1537]: time="2025-09-09T23:51:54.187483692Z" level=info msg="CreateContainer within sandbox \"4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:51:54.196650 containerd[1537]: time="2025-09-09T23:51:54.196610385Z" level=info msg="Container 0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:51:54.213451 containerd[1537]: time="2025-09-09T23:51:54.213395220Z" level=info msg="CreateContainer within sandbox \"4874364590ea3aa8e7f21ff0c8cf4d7733f834eb7e4253afe314b5ee9f8fef89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d\"" Sep 9 23:51:54.214470 containerd[1537]: time="2025-09-09T23:51:54.214191687Z" level=info msg="StartContainer for \"0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d\"" Sep 9 23:51:54.215620 containerd[1537]: time="2025-09-09T23:51:54.215583214Z" level=info msg="connecting to shim 0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d" address="unix:///run/containerd/s/b549968c3e7eaaebb53a4169f0ee48771c58c45fa3f86a9a2fe64dab44115035" protocol=ttrpc version=3 Sep 9 23:51:54.242470 systemd[1]: Started cri-containerd-0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d.scope - libcontainer container 0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d. 
Sep 9 23:51:54.273631 containerd[1537]: time="2025-09-09T23:51:54.273520638Z" level=info msg="StartContainer for \"0a59815e0f9ca9b84d030f422b9160296cf63ac8e69ec406c845de81472eea5d\" returns successfully" Sep 9 23:51:54.364097 containerd[1537]: time="2025-09-09T23:51:54.363726720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9bbxh,Uid:a0675214-f7cc-4833-962b-56da128f2310,Namespace:tigera-operator,Attempt:0,}" Sep 9 23:51:54.380706 containerd[1537]: time="2025-09-09T23:51:54.380665003Z" level=info msg="connecting to shim 1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1" address="unix:///run/containerd/s/c71870f79c7b752dcdce42797f1fcde93efe0e0c5ba913ee7e224d370775de86" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:51:54.407479 systemd[1]: Started cri-containerd-1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1.scope - libcontainer container 1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1. Sep 9 23:51:54.453454 containerd[1537]: time="2025-09-09T23:51:54.453356319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-9bbxh,Uid:a0675214-f7cc-4833-962b-56da128f2310,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1\"" Sep 9 23:51:54.456553 containerd[1537]: time="2025-09-09T23:51:54.456522253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 23:51:55.624445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612632639.mount: Deactivated successfully. 
Sep 9 23:51:55.977352 containerd[1537]: time="2025-09-09T23:51:55.977306937Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:51:55.978276 containerd[1537]: time="2025-09-09T23:51:55.978150727Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 23:51:55.979516 containerd[1537]: time="2025-09-09T23:51:55.979476740Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:51:55.981978 containerd[1537]: time="2025-09-09T23:51:55.981930848Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:51:55.982566 containerd[1537]: time="2025-09-09T23:51:55.982523478Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.525963754s" Sep 9 23:51:55.982566 containerd[1537]: time="2025-09-09T23:51:55.982561789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 23:51:55.986053 containerd[1537]: time="2025-09-09T23:51:55.986009006Z" level=info msg="CreateContainer within sandbox \"1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 23:51:55.994874 containerd[1537]: time="2025-09-09T23:51:55.994818039Z" level=info msg="Container 
50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:51:55.997833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182725269.mount: Deactivated successfully. Sep 9 23:51:56.000551 containerd[1537]: time="2025-09-09T23:51:56.000517364Z" level=info msg="CreateContainer within sandbox \"1f94d3e44989d8ad4f7dbd81d2f8ed4a22eb148c7aae1e5ab6d0a673c35618b1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8\"" Sep 9 23:51:56.001217 containerd[1537]: time="2025-09-09T23:51:56.001196303Z" level=info msg="StartContainer for \"50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8\"" Sep 9 23:51:56.002195 containerd[1537]: time="2025-09-09T23:51:56.002162112Z" level=info msg="connecting to shim 50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8" address="unix:///run/containerd/s/c71870f79c7b752dcdce42797f1fcde93efe0e0c5ba913ee7e224d370775de86" protocol=ttrpc version=3 Sep 9 23:51:56.028478 systemd[1]: Started cri-containerd-50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8.scope - libcontainer container 50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8. 
Sep 9 23:51:56.056646 containerd[1537]: time="2025-09-09T23:51:56.056530198Z" level=info msg="StartContainer for \"50122dc3394c26edc677addb7c07b3a5c9310f7e7533df6bead916da184166f8\" returns successfully" Sep 9 23:51:56.560292 kubelet[2670]: I0909 23:51:56.560048 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xlcr4" podStartSLOduration=3.560017629 podStartE2EDuration="3.560017629s" podCreationTimestamp="2025-09-09 23:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:51:54.553422509 +0000 UTC m=+7.134175550" watchObservedRunningTime="2025-09-09 23:51:56.560017629 +0000 UTC m=+9.140770670" Sep 9 23:51:56.560292 kubelet[2670]: I0909 23:51:56.560153 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-9bbxh" podStartSLOduration=1.030623748 podStartE2EDuration="2.560149088s" podCreationTimestamp="2025-09-09 23:51:54 +0000 UTC" firstStartedPulling="2025-09-09 23:51:54.454787479 +0000 UTC m=+7.035540520" lastFinishedPulling="2025-09-09 23:51:55.984312819 +0000 UTC m=+8.565065860" observedRunningTime="2025-09-09 23:51:56.559854187 +0000 UTC m=+9.140607228" watchObservedRunningTime="2025-09-09 23:51:56.560149088 +0000 UTC m=+9.140902129" Sep 9 23:52:01.535941 update_engine[1523]: I20250909 23:52:01.535845 1523 update_attempter.cc:509] Updating boot flags... Sep 9 23:52:02.031953 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 9 23:52:02.034321 sshd[1739]: Connection closed by 10.0.0.1 port 50016 Sep 9 23:52:02.034618 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 9 23:52:02.041113 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:50016.service: Deactivated successfully. Sep 9 23:52:02.043107 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 23:52:02.043359 systemd[1]: session-7.scope: Consumed 7.471s CPU time, 222.3M memory peak.
Sep 9 23:52:02.044372 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Sep 9 23:52:02.047079 systemd-logind[1520]: Removed session 7.
Sep 9 23:52:07.637256 systemd[1]: Created slice kubepods-besteffort-pod06ae634e_031c_4954_b1c8_59f2afe4f7f0.slice - libcontainer container kubepods-besteffort-pod06ae634e_031c_4954_b1c8_59f2afe4f7f0.slice.
Sep 9 23:52:07.708072 kubelet[2670]: I0909 23:52:07.708020 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06ae634e-031c-4954-b1c8-59f2afe4f7f0-tigera-ca-bundle\") pod \"calico-typha-5b46cf49fc-qkg2b\" (UID: \"06ae634e-031c-4954-b1c8-59f2afe4f7f0\") " pod="calico-system/calico-typha-5b46cf49fc-qkg2b"
Sep 9 23:52:07.708072 kubelet[2670]: I0909 23:52:07.708073 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/06ae634e-031c-4954-b1c8-59f2afe4f7f0-typha-certs\") pod \"calico-typha-5b46cf49fc-qkg2b\" (UID: \"06ae634e-031c-4954-b1c8-59f2afe4f7f0\") " pod="calico-system/calico-typha-5b46cf49fc-qkg2b"
Sep 9 23:52:07.708488 kubelet[2670]: I0909 23:52:07.708093 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qvl\" (UniqueName: \"kubernetes.io/projected/06ae634e-031c-4954-b1c8-59f2afe4f7f0-kube-api-access-k6qvl\") pod \"calico-typha-5b46cf49fc-qkg2b\" (UID: \"06ae634e-031c-4954-b1c8-59f2afe4f7f0\") " pod="calico-system/calico-typha-5b46cf49fc-qkg2b"
Sep 9 23:52:07.889443 systemd[1]: Created slice kubepods-besteffort-pod81c69c70_08a1_4ce3_a49c_aca5a6e8dd3f.slice - libcontainer container kubepods-besteffort-pod81c69c70_08a1_4ce3_a49c_aca5a6e8dd3f.slice.
Sep 9 23:52:07.910523 kubelet[2670]: I0909 23:52:07.910472 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-cni-bin-dir\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910523 kubelet[2670]: I0909 23:52:07.910524 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-var-run-calico\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910689 kubelet[2670]: I0909 23:52:07.910543 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-flexvol-driver-host\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910689 kubelet[2670]: I0909 23:52:07.910562 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-var-lib-calico\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910689 kubelet[2670]: I0909 23:52:07.910579 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-policysync\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910689 kubelet[2670]: I0909 23:52:07.910594 2670 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rlp8\" (UniqueName: \"kubernetes.io/projected/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-kube-api-access-2rlp8\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910689 kubelet[2670]: I0909 23:52:07.910612 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-node-certs\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910798 kubelet[2670]: I0909 23:52:07.910650 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-xtables-lock\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910798 kubelet[2670]: I0909 23:52:07.910693 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-cni-net-dir\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910798 kubelet[2670]: I0909 23:52:07.910713 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-cni-log-dir\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910798 kubelet[2670]: I0909 23:52:07.910737 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-lib-modules\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.910798 kubelet[2670]: I0909 23:52:07.910751 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f-tigera-ca-bundle\") pod \"calico-node-g5625\" (UID: \"81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f\") " pod="calico-system/calico-node-g5625" Sep 9 23:52:07.940153 containerd[1537]: time="2025-09-09T23:52:07.940109862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b46cf49fc-qkg2b,Uid:06ae634e-031c-4954-b1c8-59f2afe4f7f0,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:07.991159 containerd[1537]: time="2025-09-09T23:52:07.991103404Z" level=info msg="connecting to shim 0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851" address="unix:///run/containerd/s/0d2cfc715bd05bf46069c785526f04b615d6cc6c80afbd0d83fdb57619dbca7e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:08.013203 kubelet[2670]: E0909 23:52:08.013160 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.013203 kubelet[2670]: W0909 23:52:08.013194 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.020820 kubelet[2670]: E0909 23:52:08.020790 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.020820 kubelet[2670]: W0909 23:52:08.020816 2670 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.020920 kubelet[2670]: E0909 23:52:08.020846 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.022345 kubelet[2670]: E0909 23:52:08.022312 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.024360 kubelet[2670]: E0909 23:52:08.024340 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.024398 kubelet[2670]: W0909 23:52:08.024359 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.024398 kubelet[2670]: E0909 23:52:08.024377 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.028836 kubelet[2670]: E0909 23:52:08.028770 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.028836 kubelet[2670]: W0909 23:52:08.028835 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.028933 kubelet[2670]: E0909 23:52:08.028852 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.043692 systemd[1]: Started cri-containerd-0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851.scope - libcontainer container 0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851. Sep 9 23:52:08.104305 kubelet[2670]: E0909 23:52:08.102609 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830" Sep 9 23:52:08.110014 kubelet[2670]: E0909 23:52:08.109973 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.110136 kubelet[2670]: W0909 23:52:08.110102 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.110136 kubelet[2670]: E0909 23:52:08.110128 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.110978 kubelet[2670]: E0909 23:52:08.110692 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.111060 kubelet[2670]: W0909 23:52:08.110954 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.111240 kubelet[2670]: E0909 23:52:08.111209 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.112194 kubelet[2670]: E0909 23:52:08.112164 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.112194 kubelet[2670]: W0909 23:52:08.112182 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.112330 kubelet[2670]: E0909 23:52:08.112304 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.112578 kubelet[2670]: E0909 23:52:08.112557 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.112578 kubelet[2670]: W0909 23:52:08.112572 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.112736 kubelet[2670]: E0909 23:52:08.112715 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.113195 kubelet[2670]: E0909 23:52:08.113171 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.113259 kubelet[2670]: W0909 23:52:08.113186 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.113259 kubelet[2670]: E0909 23:52:08.113231 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.113699 kubelet[2670]: E0909 23:52:08.113679 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.113958 kubelet[2670]: W0909 23:52:08.113930 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.113990 kubelet[2670]: E0909 23:52:08.113962 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.114409 kubelet[2670]: E0909 23:52:08.114391 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.114409 kubelet[2670]: W0909 23:52:08.114408 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.115011 kubelet[2670]: E0909 23:52:08.114419 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.115410 kubelet[2670]: E0909 23:52:08.115381 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.115678 kubelet[2670]: W0909 23:52:08.115610 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.115678 kubelet[2670]: E0909 23:52:08.115640 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.116196 kubelet[2670]: E0909 23:52:08.116137 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.116260 kubelet[2670]: W0909 23:52:08.116172 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.116260 kubelet[2670]: E0909 23:52:08.116222 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.117293 kubelet[2670]: E0909 23:52:08.117067 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.117293 kubelet[2670]: W0909 23:52:08.117086 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.117293 kubelet[2670]: E0909 23:52:08.117126 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.118401 kubelet[2670]: E0909 23:52:08.118380 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.118401 kubelet[2670]: W0909 23:52:08.118397 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.118489 kubelet[2670]: E0909 23:52:08.118409 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.118826 kubelet[2670]: E0909 23:52:08.118809 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.119045 kubelet[2670]: W0909 23:52:08.118824 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.119076 kubelet[2670]: E0909 23:52:08.119053 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.119532 kubelet[2670]: E0909 23:52:08.119430 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.119532 kubelet[2670]: W0909 23:52:08.119511 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.119532 kubelet[2670]: E0909 23:52:08.119533 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.120353 kubelet[2670]: E0909 23:52:08.120325 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.120353 kubelet[2670]: W0909 23:52:08.120346 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.120994 kubelet[2670]: E0909 23:52:08.120762 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.121165 kubelet[2670]: E0909 23:52:08.121147 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.121165 kubelet[2670]: W0909 23:52:08.121162 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.121230 kubelet[2670]: E0909 23:52:08.121174 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.121359 kubelet[2670]: E0909 23:52:08.121343 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.121359 kubelet[2670]: W0909 23:52:08.121355 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.121430 kubelet[2670]: E0909 23:52:08.121364 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.121795 kubelet[2670]: E0909 23:52:08.121771 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.121795 kubelet[2670]: W0909 23:52:08.121789 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.121876 kubelet[2670]: E0909 23:52:08.121800 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.122391 kubelet[2670]: E0909 23:52:08.122360 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.122391 kubelet[2670]: W0909 23:52:08.122378 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.122391 kubelet[2670]: E0909 23:52:08.122390 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.122879 kubelet[2670]: E0909 23:52:08.122853 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.122879 kubelet[2670]: W0909 23:52:08.122873 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.122956 kubelet[2670]: E0909 23:52:08.122885 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.123078 kubelet[2670]: E0909 23:52:08.123063 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.123078 kubelet[2670]: W0909 23:52:08.123077 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.123136 kubelet[2670]: E0909 23:52:08.123087 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.125304 kubelet[2670]: E0909 23:52:08.124355 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.125304 kubelet[2670]: W0909 23:52:08.124373 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.125304 kubelet[2670]: E0909 23:52:08.124386 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.125304 kubelet[2670]: I0909 23:52:08.124414 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5q48\" (UniqueName: \"kubernetes.io/projected/acbcd775-647a-401e-8ee5-46577cb61830-kube-api-access-z5q48\") pod \"csi-node-driver-fgp8h\" (UID: \"acbcd775-647a-401e-8ee5-46577cb61830\") " pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:08.125434 containerd[1537]: time="2025-09-09T23:52:08.125186780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b46cf49fc-qkg2b,Uid:06ae634e-031c-4954-b1c8-59f2afe4f7f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851\"" Sep 9 23:52:08.126071 kubelet[2670]: E0909 23:52:08.126034 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.126071 kubelet[2670]: W0909 23:52:08.126060 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.126071 kubelet[2670]: E0909 23:52:08.126074 2670 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.126176 kubelet[2670]: I0909 23:52:08.126097 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acbcd775-647a-401e-8ee5-46577cb61830-kubelet-dir\") pod \"csi-node-driver-fgp8h\" (UID: \"acbcd775-647a-401e-8ee5-46577cb61830\") " pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:08.126835 kubelet[2670]: E0909 23:52:08.126805 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.126835 kubelet[2670]: W0909 23:52:08.126827 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.126930 kubelet[2670]: E0909 23:52:08.126843 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.126930 kubelet[2670]: I0909 23:52:08.126864 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/acbcd775-647a-401e-8ee5-46577cb61830-socket-dir\") pod \"csi-node-driver-fgp8h\" (UID: \"acbcd775-647a-401e-8ee5-46577cb61830\") " pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:08.127610 kubelet[2670]: E0909 23:52:08.127581 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.127610 kubelet[2670]: W0909 23:52:08.127604 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.127696 kubelet[2670]: E0909 23:52:08.127679 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.127724 kubelet[2670]: I0909 23:52:08.127705 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/acbcd775-647a-401e-8ee5-46577cb61830-registration-dir\") pod \"csi-node-driver-fgp8h\" (UID: \"acbcd775-647a-401e-8ee5-46577cb61830\") " pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:08.127882 kubelet[2670]: E0909 23:52:08.127862 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.127882 kubelet[2670]: W0909 23:52:08.127875 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.127938 kubelet[2670]: E0909 23:52:08.127926 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.128051 kubelet[2670]: E0909 23:52:08.128034 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.128051 kubelet[2670]: W0909 23:52:08.128046 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.128270 kubelet[2670]: E0909 23:52:08.128242 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.128479 kubelet[2670]: E0909 23:52:08.128456 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.128479 kubelet[2670]: W0909 23:52:08.128473 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.128546 kubelet[2670]: E0909 23:52:08.128488 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.128702 kubelet[2670]: E0909 23:52:08.128684 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.128702 kubelet[2670]: W0909 23:52:08.128698 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.128772 kubelet[2670]: E0909 23:52:08.128711 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:08.128772 kubelet[2670]: I0909 23:52:08.128730 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/acbcd775-647a-401e-8ee5-46577cb61830-varrun\") pod \"csi-node-driver-fgp8h\" (UID: \"acbcd775-647a-401e-8ee5-46577cb61830\") " pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:08.129355 kubelet[2670]: E0909 23:52:08.129328 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.129355 kubelet[2670]: W0909 23:52:08.129350 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.129437 kubelet[2670]: E0909 23:52:08.129367 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:08.129602 kubelet[2670]: E0909 23:52:08.129582 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:08.129602 kubelet[2670]: W0909 23:52:08.129595 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:08.129602 kubelet[2670]: E0909 23:52:08.129604 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 23:52:08.129794 kubelet[2670]: E0909 23:52:08.129775 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.129794 kubelet[2670]: W0909 23:52:08.129789 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.129854 kubelet[2670]: E0909 23:52:08.129798 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.129998 kubelet[2670]: E0909 23:52:08.129929 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.129998 kubelet[2670]: W0909 23:52:08.129941 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.129998 kubelet[2670]: E0909 23:52:08.129949 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.130160 kubelet[2670]: E0909 23:52:08.130083 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.130160 kubelet[2670]: W0909 23:52:08.130096 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.130160 kubelet[2670]: E0909 23:52:08.130104 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.130246 kubelet[2670]: E0909 23:52:08.130238 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.130270 kubelet[2670]: W0909 23:52:08.130245 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.130270 kubelet[2670]: E0909 23:52:08.130254 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.130992 kubelet[2670]: E0909 23:52:08.130442 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.130992 kubelet[2670]: W0909 23:52:08.130455 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.130992 kubelet[2670]: E0909 23:52:08.130464 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.138000 containerd[1537]: time="2025-09-09T23:52:08.137958299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 23:52:08.193546 containerd[1537]: time="2025-09-09T23:52:08.193504178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5625,Uid:81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f,Namespace:calico-system,Attempt:0,}"
Sep 9 23:52:08.227297 containerd[1537]: time="2025-09-09T23:52:08.226508019Z" level=info msg="connecting to shim 284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6" address="unix:///run/containerd/s/0d3541b1dba270fad126e9d4eabdc0000572490dd660da3846599d409bd05745" namespace=k8s.io protocol=ttrpc version=3
Sep 9 23:52:08.230685 kubelet[2670]: E0909 23:52:08.230660 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.230836 kubelet[2670]: W0909 23:52:08.230821 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.230949 kubelet[2670]: E0909 23:52:08.230936 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.231443 kubelet[2670]: E0909 23:52:08.231428 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.231538 kubelet[2670]: W0909 23:52:08.231524 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.231631 kubelet[2670]: E0909 23:52:08.231615 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.231886 kubelet[2670]: E0909 23:52:08.231866 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.231886 kubelet[2670]: W0909 23:52:08.231884 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.231958 kubelet[2670]: E0909 23:52:08.231906 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.232081 kubelet[2670]: E0909 23:52:08.232069 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.232081 kubelet[2670]: W0909 23:52:08.232080 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.232132 kubelet[2670]: E0909 23:52:08.232096 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.232258 kubelet[2670]: E0909 23:52:08.232246 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.232258 kubelet[2670]: W0909 23:52:08.232257 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.232347 kubelet[2670]: E0909 23:52:08.232271 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.232472 kubelet[2670]: E0909 23:52:08.232456 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.232472 kubelet[2670]: W0909 23:52:08.232470 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.232534 kubelet[2670]: E0909 23:52:08.232484 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.232683 kubelet[2670]: E0909 23:52:08.232669 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.232683 kubelet[2670]: W0909 23:52:08.232682 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.232749 kubelet[2670]: E0909 23:52:08.232695 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.232981 kubelet[2670]: E0909 23:52:08.232871 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.232981 kubelet[2670]: W0909 23:52:08.232879 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.232981 kubelet[2670]: E0909 23:52:08.232955 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.233079 kubelet[2670]: E0909 23:52:08.233073 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.233107 kubelet[2670]: W0909 23:52:08.233080 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.233190 kubelet[2670]: E0909 23:52:08.233159 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.233426 kubelet[2670]: E0909 23:52:08.233415 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.233426 kubelet[2670]: W0909 23:52:08.233426 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.233484 kubelet[2670]: E0909 23:52:08.233442 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.233808 kubelet[2670]: E0909 23:52:08.233795 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.233808 kubelet[2670]: W0909 23:52:08.233807 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.233921 kubelet[2670]: E0909 23:52:08.233829 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.234060 kubelet[2670]: E0909 23:52:08.234048 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.234060 kubelet[2670]: W0909 23:52:08.234059 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.234152 kubelet[2670]: E0909 23:52:08.234136 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234272 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.234754 kubelet[2670]: W0909 23:52:08.234312 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234357 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234520 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.234754 kubelet[2670]: W0909 23:52:08.234527 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234603 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234684 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.234754 kubelet[2670]: W0909 23:52:08.234690 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.234754 kubelet[2670]: E0909 23:52:08.234726 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.234994 kubelet[2670]: E0909 23:52:08.234931 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.234994 kubelet[2670]: W0909 23:52:08.234942 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.234994 kubelet[2670]: E0909 23:52:08.234955 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.235244 kubelet[2670]: E0909 23:52:08.235229 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.235392 kubelet[2670]: W0909 23:52:08.235365 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.235862 kubelet[2670]: E0909 23:52:08.235770 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.235975 kubelet[2670]: E0909 23:52:08.235963 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.236029 kubelet[2670]: W0909 23:52:08.236019 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.236234 kubelet[2670]: E0909 23:52:08.236095 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.236352 kubelet[2670]: E0909 23:52:08.236340 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.236413 kubelet[2670]: W0909 23:52:08.236401 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.236656 kubelet[2670]: E0909 23:52:08.236493 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.237050 kubelet[2670]: E0909 23:52:08.236925 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.237050 kubelet[2670]: W0909 23:52:08.236941 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.237050 kubelet[2670]: E0909 23:52:08.236978 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.237225 kubelet[2670]: E0909 23:52:08.237214 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.237275 kubelet[2670]: W0909 23:52:08.237265 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.237395 kubelet[2670]: E0909 23:52:08.237364 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.237584 kubelet[2670]: E0909 23:52:08.237562 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.237643 kubelet[2670]: W0909 23:52:08.237633 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.237761 kubelet[2670]: E0909 23:52:08.237734 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.237880 kubelet[2670]: E0909 23:52:08.237864 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.238028 kubelet[2670]: W0909 23:52:08.237928 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.238028 kubelet[2670]: E0909 23:52:08.237963 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.238164 kubelet[2670]: E0909 23:52:08.238152 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.238215 kubelet[2670]: W0909 23:52:08.238204 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.238264 kubelet[2670]: E0909 23:52:08.238255 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.240699 kubelet[2670]: E0909 23:52:08.240675 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.240699 kubelet[2670]: W0909 23:52:08.240698 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.240796 kubelet[2670]: E0909 23:52:08.240716 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:08.249207 kubelet[2670]: E0909 23:52:08.249170 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:08.249207 kubelet[2670]: W0909 23:52:08.249194 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:08.249294 kubelet[2670]: E0909 23:52:08.249212 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:08.273598 systemd[1]: Started cri-containerd-284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6.scope - libcontainer container 284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6.
Sep 9 23:52:08.354563 containerd[1537]: time="2025-09-09T23:52:08.354526252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5625,Uid:81c69c70-08a1-4ce3-a49c-aca5a6e8dd3f,Namespace:calico-system,Attempt:0,} returns sandbox id \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\""
Sep 9 23:52:09.022659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653169024.mount: Deactivated successfully.
Sep 9 23:52:09.510807 kubelet[2670]: E0909 23:52:09.510754 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830"
Sep 9 23:52:09.882593 containerd[1537]: time="2025-09-09T23:52:09.882359190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.883450 containerd[1537]: time="2025-09-09T23:52:09.883422450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 9 23:52:09.884263 containerd[1537]: time="2025-09-09T23:52:09.884236972Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.886125 containerd[1537]: time="2025-09-09T23:52:09.886097788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.886971 containerd[1537]: time="2025-09-09T23:52:09.886944242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.748943326s"
Sep 9 23:52:09.887018 containerd[1537]: time="2025-09-09T23:52:09.886980016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 9 23:52:09.889274 containerd[1537]: time="2025-09-09T23:52:09.888199338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 23:52:09.898042 containerd[1537]: time="2025-09-09T23:52:09.897994369Z" level=info msg="CreateContainer within sandbox \"0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 23:52:09.903972 containerd[1537]: time="2025-09-09T23:52:09.903928315Z" level=info msg="Container 11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:52:09.917543 containerd[1537]: time="2025-09-09T23:52:09.917448618Z" level=info msg="CreateContainer within sandbox \"0868e71726c64d9d56a46d62244c4eb6d4e5dc07b144fd5cba1b9b60283d1851\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2\""
Sep 9 23:52:09.918221 containerd[1537]: time="2025-09-09T23:52:09.917953258Z" level=info msg="StartContainer for \"11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2\""
Sep 9 23:52:09.919017 containerd[1537]: time="2025-09-09T23:52:09.918988267Z" level=info msg="connecting to shim 11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2" address="unix:///run/containerd/s/0d2cfc715bd05bf46069c785526f04b615d6cc6c80afbd0d83fdb57619dbca7e" protocol=ttrpc version=3
Sep 9 23:52:09.940476 systemd[1]: Started cri-containerd-11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2.scope - libcontainer container 11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2.
Sep 9 23:52:09.981211 containerd[1537]: time="2025-09-09T23:52:09.981161559Z" level=info msg="StartContainer for \"11a9799ba054276baf9ffb31247d2b43fbdcccc0d35de06292dbc8a469ae40d2\" returns successfully"
Sep 9 23:52:10.603463 kubelet[2670]: I0909 23:52:10.603251 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b46cf49fc-qkg2b" podStartSLOduration=1.84924396 podStartE2EDuration="3.603137397s" podCreationTimestamp="2025-09-09 23:52:07 +0000 UTC" firstStartedPulling="2025-09-09 23:52:08.133899181 +0000 UTC m=+20.714652222" lastFinishedPulling="2025-09-09 23:52:09.887792578 +0000 UTC m=+22.468545659" observedRunningTime="2025-09-09 23:52:10.602661377 +0000 UTC m=+23.183414418" watchObservedRunningTime="2025-09-09 23:52:10.603137397 +0000 UTC m=+23.183890398"
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.641981 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.642383 kubelet[2670]: W0909 23:52:10.642003 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.642029 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.642198 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.642383 kubelet[2670]: W0909 23:52:10.642205 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.642213 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.642378 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.642383 kubelet[2670]: W0909 23:52:10.642387 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.642383 kubelet[2670]: E0909 23:52:10.642398 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:10.642706 kubelet[2670]: E0909 23:52:10.642563 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.642706 kubelet[2670]: W0909 23:52:10.642571 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.642706 kubelet[2670]: E0909 23:52:10.642580 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:10.642869 kubelet[2670]: E0909 23:52:10.642804 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.642869 kubelet[2670]: W0909 23:52:10.642828 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.642869 kubelet[2670]: E0909 23:52:10.642837 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:10.643010 kubelet[2670]: E0909 23:52:10.642996 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.643010 kubelet[2670]: W0909 23:52:10.643007 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.643080 kubelet[2670]: E0909 23:52:10.643015 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:10.643178 kubelet[2670]: E0909 23:52:10.643167 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.643178 kubelet[2670]: W0909 23:52:10.643176 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.643233 kubelet[2670]: E0909 23:52:10.643184 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 23:52:10.643347 kubelet[2670]: E0909 23:52:10.643334 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.643347 kubelet[2670]: W0909 23:52:10.643345 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.643421 kubelet[2670]: E0909 23:52:10.643354 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 23:52:10.643535 kubelet[2670]: E0909 23:52:10.643504 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 23:52:10.643535 kubelet[2670]: W0909 23:52:10.643514 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 23:52:10.643535 kubelet[2670]: E0909 23:52:10.643533 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 23:52:10.643688 kubelet[2670]: E0909 23:52:10.643677 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.643688 kubelet[2670]: W0909 23:52:10.643687 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.643688 kubelet[2670]: E0909 23:52:10.643695 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.643843 kubelet[2670]: E0909 23:52:10.643831 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.643843 kubelet[2670]: W0909 23:52:10.643840 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.643913 kubelet[2670]: E0909 23:52:10.643847 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.645681 kubelet[2670]: E0909 23:52:10.645655 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.645681 kubelet[2670]: W0909 23:52:10.645676 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.645784 kubelet[2670]: E0909 23:52:10.645709 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.645949 kubelet[2670]: E0909 23:52:10.645918 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.645949 kubelet[2670]: W0909 23:52:10.645932 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.645949 kubelet[2670]: E0909 23:52:10.645941 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.646101 kubelet[2670]: E0909 23:52:10.646090 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.646101 kubelet[2670]: W0909 23:52:10.646100 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.646159 kubelet[2670]: E0909 23:52:10.646108 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.646264 kubelet[2670]: E0909 23:52:10.646253 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.646264 kubelet[2670]: W0909 23:52:10.646264 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.646346 kubelet[2670]: E0909 23:52:10.646271 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.652908 kubelet[2670]: E0909 23:52:10.652850 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.652908 kubelet[2670]: W0909 23:52:10.652869 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.652908 kubelet[2670]: E0909 23:52:10.652884 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.653304 kubelet[2670]: E0909 23:52:10.653271 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.653469 kubelet[2670]: W0909 23:52:10.653367 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.653469 kubelet[2670]: E0909 23:52:10.653409 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.653652 kubelet[2670]: E0909 23:52:10.653630 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.653683 kubelet[2670]: W0909 23:52:10.653652 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.653721 kubelet[2670]: E0909 23:52:10.653687 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.653883 kubelet[2670]: E0909 23:52:10.653868 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.653914 kubelet[2670]: W0909 23:52:10.653882 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.653943 kubelet[2670]: E0909 23:52:10.653915 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.654109 kubelet[2670]: E0909 23:52:10.654095 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.654109 kubelet[2670]: W0909 23:52:10.654108 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.654235 kubelet[2670]: E0909 23:52:10.654126 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.654324 kubelet[2670]: E0909 23:52:10.654312 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.654324 kubelet[2670]: W0909 23:52:10.654323 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.654391 kubelet[2670]: E0909 23:52:10.654336 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654571 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.655083 kubelet[2670]: W0909 23:52:10.654586 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654600 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654779 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.655083 kubelet[2670]: W0909 23:52:10.654789 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654809 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654972 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.655083 kubelet[2670]: W0909 23:52:10.654979 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.655083 kubelet[2670]: E0909 23:52:10.654992 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.655336 kubelet[2670]: E0909 23:52:10.655197 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.655336 kubelet[2670]: W0909 23:52:10.655207 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.655336 kubelet[2670]: E0909 23:52:10.655217 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.655706 kubelet[2670]: E0909 23:52:10.655637 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.655706 kubelet[2670]: W0909 23:52:10.655655 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.655706 kubelet[2670]: E0909 23:52:10.655669 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.656021 kubelet[2670]: E0909 23:52:10.655982 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.656021 kubelet[2670]: W0909 23:52:10.656015 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.656227 kubelet[2670]: E0909 23:52:10.656037 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.656505 kubelet[2670]: E0909 23:52:10.656488 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.656698 kubelet[2670]: W0909 23:52:10.656575 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.656698 kubelet[2670]: E0909 23:52:10.656605 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.656981 kubelet[2670]: E0909 23:52:10.656963 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.657172 kubelet[2670]: W0909 23:52:10.657050 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.657172 kubelet[2670]: E0909 23:52:10.657080 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.657378 kubelet[2670]: E0909 23:52:10.657363 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.657480 kubelet[2670]: W0909 23:52:10.657466 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.657621 kubelet[2670]: E0909 23:52:10.657606 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.657926 kubelet[2670]: E0909 23:52:10.657906 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.657963 kubelet[2670]: W0909 23:52:10.657926 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.657963 kubelet[2670]: E0909 23:52:10.657944 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:10.658345 kubelet[2670]: E0909 23:52:10.658273 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.658388 kubelet[2670]: W0909 23:52:10.658346 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.658388 kubelet[2670]: E0909 23:52:10.658378 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:10.659520 kubelet[2670]: E0909 23:52:10.659498 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:10.659574 kubelet[2670]: W0909 23:52:10.659519 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:10.659574 kubelet[2670]: E0909 23:52:10.659552 2670 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:11.086819 containerd[1537]: time="2025-09-09T23:52:11.086612285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:11.088264 containerd[1537]: time="2025-09-09T23:52:11.088223029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 23:52:11.089561 containerd[1537]: time="2025-09-09T23:52:11.089525140Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:11.091635 containerd[1537]: time="2025-09-09T23:52:11.091586847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:11.092336 containerd[1537]: time="2025-09-09T23:52:11.092296704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.202995371s" Sep 9 23:52:11.092385 containerd[1537]: time="2025-09-09T23:52:11.092337799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 23:52:11.100422 containerd[1537]: time="2025-09-09T23:52:11.100373551Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 23:52:11.109465 containerd[1537]: time="2025-09-09T23:52:11.109400862Z" level=info msg="Container 646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:11.118259 containerd[1537]: time="2025-09-09T23:52:11.118209773Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\"" Sep 9 23:52:11.120079 containerd[1537]: time="2025-09-09T23:52:11.119999902Z" level=info msg="StartContainer for \"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\"" Sep 9 23:52:11.128046 containerd[1537]: time="2025-09-09T23:52:11.128004922Z" level=info msg="connecting to shim 646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0" address="unix:///run/containerd/s/0d3541b1dba270fad126e9d4eabdc0000572490dd660da3846599d409bd05745" protocol=ttrpc version=3 Sep 9 23:52:11.150460 systemd[1]: Started cri-containerd-646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0.scope - libcontainer container 646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0. Sep 9 23:52:11.195991 containerd[1537]: time="2025-09-09T23:52:11.195928013Z" level=info msg="StartContainer for \"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\" returns successfully" Sep 9 23:52:11.197423 systemd[1]: cri-containerd-646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0.scope: Deactivated successfully. 
Sep 9 23:52:11.208880 containerd[1537]: time="2025-09-09T23:52:11.208839251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\" id:\"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\" pid:3368 exited_at:{seconds:1757461931 nanos:207107343}" Sep 9 23:52:11.213930 containerd[1537]: time="2025-09-09T23:52:11.213885479Z" level=info msg="received exit event container_id:\"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\" id:\"646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0\" pid:3368 exited_at:{seconds:1757461931 nanos:207107343}" Sep 9 23:52:11.250422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646d03d49bac8ae2d9a67108464711da84647b35ba57984e5ef0516b5b2b17a0-rootfs.mount: Deactivated successfully. Sep 9 23:52:11.509865 kubelet[2670]: E0909 23:52:11.509533 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830" Sep 9 23:52:11.600501 kubelet[2670]: I0909 23:52:11.600458 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:11.603231 containerd[1537]: time="2025-09-09T23:52:11.603021673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 23:52:13.512154 kubelet[2670]: E0909 23:52:13.512114 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830" Sep 9 23:52:15.094228 containerd[1537]: time="2025-09-09T23:52:15.094149982Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:15.094844 containerd[1537]: time="2025-09-09T23:52:15.094809385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 23:52:15.095824 containerd[1537]: time="2025-09-09T23:52:15.095793969Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:15.098308 containerd[1537]: time="2025-09-09T23:52:15.098255967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:15.098948 containerd[1537]: time="2025-09-09T23:52:15.098921412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.495859125s" Sep 9 23:52:15.098997 containerd[1537]: time="2025-09-09T23:52:15.098954342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 23:52:15.102794 containerd[1537]: time="2025-09-09T23:52:15.102751232Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 23:52:15.112693 containerd[1537]: time="2025-09-09T23:52:15.112650200Z" level=info msg="Container 63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec: CDI devices from CRI Config.CDIDevices: 
[]" Sep 9 23:52:15.128317 containerd[1537]: time="2025-09-09T23:52:15.128233560Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\"" Sep 9 23:52:15.128841 containerd[1537]: time="2025-09-09T23:52:15.128818540Z" level=info msg="StartContainer for \"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\"" Sep 9 23:52:15.130883 containerd[1537]: time="2025-09-09T23:52:15.130590166Z" level=info msg="connecting to shim 63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec" address="unix:///run/containerd/s/0d3541b1dba270fad126e9d4eabdc0000572490dd660da3846599d409bd05745" protocol=ttrpc version=3 Sep 9 23:52:15.150481 systemd[1]: Started cri-containerd-63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec.scope - libcontainer container 63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec. Sep 9 23:52:15.188319 containerd[1537]: time="2025-09-09T23:52:15.188238162Z" level=info msg="StartContainer for \"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\" returns successfully" Sep 9 23:52:15.512866 kubelet[2670]: E0909 23:52:15.512827 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830" Sep 9 23:52:15.760473 systemd[1]: cri-containerd-63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec.scope: Deactivated successfully. Sep 9 23:52:15.760776 systemd[1]: cri-containerd-63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec.scope: Consumed 470ms CPU time, 176.3M memory peak, 2.7M read from disk, 165.8M written to disk. 
Sep 9 23:52:15.762158 containerd[1537]: time="2025-09-09T23:52:15.762114636Z" level=info msg="received exit event container_id:\"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\" id:\"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\" pid:3430 exited_at:{seconds:1757461935 nanos:761861998}" Sep 9 23:52:15.762385 containerd[1537]: time="2025-09-09T23:52:15.762337345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\" id:\"63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec\" pid:3430 exited_at:{seconds:1757461935 nanos:761861998}" Sep 9 23:52:15.781987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63be368fad35320f4f9d43f849eb0526a6e11561f7fa3dbe87c4169b6db460ec-rootfs.mount: Deactivated successfully. Sep 9 23:52:15.788652 kubelet[2670]: I0909 23:52:15.788449 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:52:15.892761 kubelet[2670]: I0909 23:52:15.892718 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjk4x\" (UniqueName: \"kubernetes.io/projected/d46cb187-f162-4110-a2ea-d6d4a6effa16-kube-api-access-vjk4x\") pod \"calico-apiserver-6f888f5b9b-58hv6\" (UID: \"d46cb187-f162-4110-a2ea-d6d4a6effa16\") " pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" Sep 9 23:52:15.892761 kubelet[2670]: I0909 23:52:15.892759 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7-config-volume\") pod \"coredns-668d6bf9bc-d2zkf\" (UID: \"18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7\") " pod="kube-system/coredns-668d6bf9bc-d2zkf" Sep 9 23:52:15.893010 kubelet[2670]: I0909 23:52:15.892803 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-ca-bundle\") pod \"whisker-5756cbd78d-h8zs6\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " pod="calico-system/whisker-5756cbd78d-h8zs6" Sep 9 23:52:15.893010 kubelet[2670]: I0909 23:52:15.892837 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d46cb187-f162-4110-a2ea-d6d4a6effa16-calico-apiserver-certs\") pod \"calico-apiserver-6f888f5b9b-58hv6\" (UID: \"d46cb187-f162-4110-a2ea-d6d4a6effa16\") " pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" Sep 9 23:52:15.893010 kubelet[2670]: I0909 23:52:15.892888 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc8d9799-b132-4123-89ce-8fd0e5cf7847-tigera-ca-bundle\") pod \"calico-kube-controllers-669764c947-zn8lm\" (UID: \"cc8d9799-b132-4123-89ce-8fd0e5cf7847\") " pod="calico-system/calico-kube-controllers-669764c947-zn8lm" Sep 9 23:52:15.893010 kubelet[2670]: I0909 23:52:15.892930 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzbnr\" (UniqueName: \"kubernetes.io/projected/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-kube-api-access-dzbnr\") pod \"whisker-5756cbd78d-h8zs6\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " pod="calico-system/whisker-5756cbd78d-h8zs6" Sep 9 23:52:15.893010 kubelet[2670]: I0909 23:52:15.892951 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-backend-key-pair\") pod \"whisker-5756cbd78d-h8zs6\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " pod="calico-system/whisker-5756cbd78d-h8zs6" Sep 9 23:52:15.893133 kubelet[2670]: I0909 
23:52:15.892973 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4-config-volume\") pod \"coredns-668d6bf9bc-nl69b\" (UID: \"ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4\") " pod="kube-system/coredns-668d6bf9bc-nl69b" Sep 9 23:52:15.893133 kubelet[2670]: I0909 23:52:15.892994 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2gv\" (UniqueName: \"kubernetes.io/projected/ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4-kube-api-access-sw2gv\") pod \"coredns-668d6bf9bc-nl69b\" (UID: \"ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4\") " pod="kube-system/coredns-668d6bf9bc-nl69b" Sep 9 23:52:15.893133 kubelet[2670]: I0909 23:52:15.893016 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69ng\" (UniqueName: \"kubernetes.io/projected/18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7-kube-api-access-m69ng\") pod \"coredns-668d6bf9bc-d2zkf\" (UID: \"18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7\") " pod="kube-system/coredns-668d6bf9bc-d2zkf" Sep 9 23:52:15.893133 kubelet[2670]: I0909 23:52:15.893048 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdwq\" (UniqueName: \"kubernetes.io/projected/cc8d9799-b132-4123-89ce-8fd0e5cf7847-kube-api-access-nxdwq\") pod \"calico-kube-controllers-669764c947-zn8lm\" (UID: \"cc8d9799-b132-4123-89ce-8fd0e5cf7847\") " pod="calico-system/calico-kube-controllers-669764c947-zn8lm" Sep 9 23:52:15.906125 systemd[1]: Created slice kubepods-burstable-podee1e6ae6_30e0_4905_8cb1_5c4e99d07cc4.slice - libcontainer container kubepods-burstable-podee1e6ae6_30e0_4905_8cb1_5c4e99d07cc4.slice. 
Sep 9 23:52:15.928176 systemd[1]: Created slice kubepods-burstable-pod18a25db4_ea4e_4b4f_a6d8_c7abbf49ccc7.slice - libcontainer container kubepods-burstable-pod18a25db4_ea4e_4b4f_a6d8_c7abbf49ccc7.slice. Sep 9 23:52:15.935665 systemd[1]: Created slice kubepods-besteffort-podcc8d9799_b132_4123_89ce_8fd0e5cf7847.slice - libcontainer container kubepods-besteffort-podcc8d9799_b132_4123_89ce_8fd0e5cf7847.slice. Sep 9 23:52:15.944131 systemd[1]: Created slice kubepods-besteffort-podd6b9a9c5_0483_4748_b9cf_14db01b82a3b.slice - libcontainer container kubepods-besteffort-podd6b9a9c5_0483_4748_b9cf_14db01b82a3b.slice. Sep 9 23:52:15.949201 systemd[1]: Created slice kubepods-besteffort-podd46cb187_f162_4110_a2ea_d6d4a6effa16.slice - libcontainer container kubepods-besteffort-podd46cb187_f162_4110_a2ea_d6d4a6effa16.slice. Sep 9 23:52:15.956833 systemd[1]: Created slice kubepods-besteffort-pod1cfc1f33_b322_43b3_8dca_b4d7f224158e.slice - libcontainer container kubepods-besteffort-pod1cfc1f33_b322_43b3_8dca_b4d7f224158e.slice. Sep 9 23:52:15.964429 systemd[1]: Created slice kubepods-besteffort-pod9079e24f_6260_49d1_81ae_76ffac9f3325.slice - libcontainer container kubepods-besteffort-pod9079e24f_6260_49d1_81ae_76ffac9f3325.slice. Sep 9 23:52:15.969089 systemd[1]: Created slice kubepods-besteffort-pod3f44a001_e225_4e57_afa7_a134d2ce290b.slice - libcontainer container kubepods-besteffort-pod3f44a001_e225_4e57_afa7_a134d2ce290b.slice. 
Sep 9 23:52:15.994303 kubelet[2670]: I0909 23:52:15.994203 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f44a001-e225-4e57-afa7-a134d2ce290b-calico-apiserver-certs\") pod \"calico-apiserver-5f79848fcd-tbt2p\" (UID: \"3f44a001-e225-4e57-afa7-a134d2ce290b\") " pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" Sep 9 23:52:15.995139 kubelet[2670]: I0909 23:52:15.994430 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwfx\" (UniqueName: \"kubernetes.io/projected/1cfc1f33-b322-43b3-8dca-b4d7f224158e-kube-api-access-7jwfx\") pod \"goldmane-54d579b49d-46xdr\" (UID: \"1cfc1f33-b322-43b3-8dca-b4d7f224158e\") " pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:15.995139 kubelet[2670]: I0909 23:52:15.994471 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pv8l\" (UniqueName: \"kubernetes.io/projected/3f44a001-e225-4e57-afa7-a134d2ce290b-kube-api-access-2pv8l\") pod \"calico-apiserver-5f79848fcd-tbt2p\" (UID: \"3f44a001-e225-4e57-afa7-a134d2ce290b\") " pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" Sep 9 23:52:15.995139 kubelet[2670]: I0909 23:52:15.994526 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cfc1f33-b322-43b3-8dca-b4d7f224158e-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-46xdr\" (UID: \"1cfc1f33-b322-43b3-8dca-b4d7f224158e\") " pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:15.995139 kubelet[2670]: I0909 23:52:15.994542 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1cfc1f33-b322-43b3-8dca-b4d7f224158e-goldmane-key-pair\") pod 
\"goldmane-54d579b49d-46xdr\" (UID: \"1cfc1f33-b322-43b3-8dca-b4d7f224158e\") " pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:15.995139 kubelet[2670]: I0909 23:52:15.994583 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfc1f33-b322-43b3-8dca-b4d7f224158e-config\") pod \"goldmane-54d579b49d-46xdr\" (UID: \"1cfc1f33-b322-43b3-8dca-b4d7f224158e\") " pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:15.995319 kubelet[2670]: I0909 23:52:15.994634 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27q4k\" (UniqueName: \"kubernetes.io/projected/9079e24f-6260-49d1-81ae-76ffac9f3325-kube-api-access-27q4k\") pod \"calico-apiserver-5f79848fcd-kzrk6\" (UID: \"9079e24f-6260-49d1-81ae-76ffac9f3325\") " pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" Sep 9 23:52:15.995319 kubelet[2670]: I0909 23:52:15.994653 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9079e24f-6260-49d1-81ae-76ffac9f3325-calico-apiserver-certs\") pod \"calico-apiserver-5f79848fcd-kzrk6\" (UID: \"9079e24f-6260-49d1-81ae-76ffac9f3325\") " pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" Sep 9 23:52:16.224574 containerd[1537]: time="2025-09-09T23:52:16.224499238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nl69b,Uid:ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:16.234556 containerd[1537]: time="2025-09-09T23:52:16.234518128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2zkf,Uid:18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:16.241314 containerd[1537]: time="2025-09-09T23:52:16.241257767Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-669764c947-zn8lm,Uid:cc8d9799-b132-4123-89ce-8fd0e5cf7847,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:16.247002 containerd[1537]: time="2025-09-09T23:52:16.246956216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5756cbd78d-h8zs6,Uid:d6b9a9c5-0483-4748-b9cf-14db01b82a3b,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:16.257006 containerd[1537]: time="2025-09-09T23:52:16.256270417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f888f5b9b-58hv6,Uid:d46cb187-f162-4110-a2ea-d6d4a6effa16,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:16.262442 containerd[1537]: time="2025-09-09T23:52:16.260895429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-46xdr,Uid:1cfc1f33-b322-43b3-8dca-b4d7f224158e,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:16.271717 containerd[1537]: time="2025-09-09T23:52:16.271660900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-kzrk6,Uid:9079e24f-6260-49d1-81ae-76ffac9f3325,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:16.276737 containerd[1537]: time="2025-09-09T23:52:16.276669785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-tbt2p,Uid:3f44a001-e225-4e57-afa7-a134d2ce290b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:16.413534 containerd[1537]: time="2025-09-09T23:52:16.413464381Z" level=error msg="Failed to destroy network for sandbox \"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.419485 containerd[1537]: time="2025-09-09T23:52:16.419394139Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f888f5b9b-58hv6,Uid:d46cb187-f162-4110-a2ea-d6d4a6effa16,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.431336 kubelet[2670]: E0909 23:52:16.431257 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.435193 containerd[1537]: time="2025-09-09T23:52:16.434733727Z" level=error msg="Failed to destroy network for sandbox \"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.435386 containerd[1537]: time="2025-09-09T23:52:16.434857204Z" level=error msg="Failed to destroy network for sandbox \"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.436819 kubelet[2670]: E0909 23:52:16.436361 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" Sep 9 23:52:16.436819 kubelet[2670]: E0909 23:52:16.436433 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" Sep 9 23:52:16.436819 kubelet[2670]: E0909 23:52:16.436497 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f888f5b9b-58hv6_calico-apiserver(d46cb187-f162-4110-a2ea-d6d4a6effa16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f888f5b9b-58hv6_calico-apiserver(d46cb187-f162-4110-a2ea-d6d4a6effa16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd11a1969b1c05255e941d8f2294abd3a0ce7a64f6418a885bf352ccd7dbf8a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" podUID="d46cb187-f162-4110-a2ea-d6d4a6effa16" Sep 9 23:52:16.437021 containerd[1537]: time="2025-09-09T23:52:16.434967036Z" level=error msg="Failed to destroy network for sandbox \"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.437128 containerd[1537]: 
time="2025-09-09T23:52:16.435557211Z" level=error msg="Failed to destroy network for sandbox \"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.438387 containerd[1537]: time="2025-09-09T23:52:16.438325392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5756cbd78d-h8zs6,Uid:d6b9a9c5-0483-4748-b9cf-14db01b82a3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.438773 kubelet[2670]: E0909 23:52:16.438723 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.438861 kubelet[2670]: E0909 23:52:16.438787 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5756cbd78d-h8zs6" Sep 9 23:52:16.438861 kubelet[2670]: E0909 23:52:16.438807 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5756cbd78d-h8zs6" Sep 9 23:52:16.438913 kubelet[2670]: E0909 23:52:16.438850 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5756cbd78d-h8zs6_calico-system(d6b9a9c5-0483-4748-b9cf-14db01b82a3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5756cbd78d-h8zs6_calico-system(d6b9a9c5-0483-4748-b9cf-14db01b82a3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9348181299253536b24058a8fa1fc9dfb05e5cfb9cf8e1816b56fe01220a62a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5756cbd78d-h8zs6" podUID="d6b9a9c5-0483-4748-b9cf-14db01b82a3b" Sep 9 23:52:16.439006 containerd[1537]: time="2025-09-09T23:52:16.438760841Z" level=error msg="Failed to destroy network for sandbox \"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.439252 containerd[1537]: time="2025-09-09T23:52:16.439227539Z" level=error msg="Failed to destroy network for sandbox \"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.439778 containerd[1537]: 
time="2025-09-09T23:52:16.439744613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nl69b,Uid:ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.440172 kubelet[2670]: E0909 23:52:16.440130 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.441513 kubelet[2670]: E0909 23:52:16.440273 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nl69b" Sep 9 23:52:16.441513 kubelet[2670]: E0909 23:52:16.440598 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nl69b" Sep 9 23:52:16.441513 kubelet[2670]: E0909 
23:52:16.440652 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nl69b_kube-system(ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nl69b_kube-system(ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6230ff5b20de61c0223a4fb568e5ac65e1aac85314aa578f652dc7740f5c81dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nl69b" podUID="ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4" Sep 9 23:52:16.442725 containerd[1537]: time="2025-09-09T23:52:16.442680043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-tbt2p,Uid:3f44a001-e225-4e57-afa7-a134d2ce290b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.443498 kubelet[2670]: E0909 23:52:16.443456 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.443570 kubelet[2670]: E0909 23:52:16.443520 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" Sep 9 23:52:16.443570 kubelet[2670]: E0909 23:52:16.443552 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" Sep 9 23:52:16.443632 kubelet[2670]: E0909 23:52:16.443593 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f79848fcd-tbt2p_calico-apiserver(3f44a001-e225-4e57-afa7-a134d2ce290b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f79848fcd-tbt2p_calico-apiserver(3f44a001-e225-4e57-afa7-a134d2ce290b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09fe51f570383dec48b60d66927cba79e7bfe50d89b27a97b5975adad42af5be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" podUID="3f44a001-e225-4e57-afa7-a134d2ce290b" Sep 9 23:52:16.444469 containerd[1537]: time="2025-09-09T23:52:16.444410636Z" level=error msg="Failed to destroy network for sandbox \"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 9 23:52:16.461698 containerd[1537]: time="2025-09-09T23:52:16.461630381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-kzrk6,Uid:9079e24f-6260-49d1-81ae-76ffac9f3325,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.461972 kubelet[2670]: E0909 23:52:16.461912 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.462052 kubelet[2670]: E0909 23:52:16.461975 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" Sep 9 23:52:16.462052 kubelet[2670]: E0909 23:52:16.461996 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" Sep 9 23:52:16.462102 kubelet[2670]: E0909 23:52:16.462039 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f79848fcd-kzrk6_calico-apiserver(9079e24f-6260-49d1-81ae-76ffac9f3325)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f79848fcd-kzrk6_calico-apiserver(9079e24f-6260-49d1-81ae-76ffac9f3325)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f326cd1db334aee3ea7c21169167f16e2ad96081543d74b6f88478078a2badf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" podUID="9079e24f-6260-49d1-81ae-76ffac9f3325" Sep 9 23:52:16.470189 containerd[1537]: time="2025-09-09T23:52:16.470006264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669764c947-zn8lm,Uid:cc8d9799-b132-4123-89ce-8fd0e5cf7847,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.470584 kubelet[2670]: E0909 23:52:16.470466 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.470584 kubelet[2670]: E0909 
23:52:16.470521 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669764c947-zn8lm" Sep 9 23:52:16.470584 kubelet[2670]: E0909 23:52:16.470540 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-669764c947-zn8lm" Sep 9 23:52:16.470707 kubelet[2670]: E0909 23:52:16.470580 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-669764c947-zn8lm_calico-system(cc8d9799-b132-4123-89ce-8fd0e5cf7847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-669764c947-zn8lm_calico-system(cc8d9799-b132-4123-89ce-8fd0e5cf7847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af5e3b0ed44e9637f9b60afc33206fc1da831d20545f897a56a13b693aa68eed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-669764c947-zn8lm" podUID="cc8d9799-b132-4123-89ce-8fd0e5cf7847" Sep 9 23:52:16.482057 containerd[1537]: time="2025-09-09T23:52:16.481736382Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-46xdr,Uid:1cfc1f33-b322-43b3-8dca-b4d7f224158e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.483584 kubelet[2670]: E0909 23:52:16.482636 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.483584 kubelet[2670]: E0909 23:52:16.482869 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:16.483584 kubelet[2670]: E0909 23:52:16.482907 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-46xdr" Sep 9 23:52:16.483755 kubelet[2670]: E0909 23:52:16.482956 2670 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-46xdr_calico-system(1cfc1f33-b322-43b3-8dca-b4d7f224158e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-46xdr_calico-system(1cfc1f33-b322-43b3-8dca-b4d7f224158e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a06e44266c0afd0805fae1490d3814eb4625383b85b2a2f90f479c24c00133\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-46xdr" podUID="1cfc1f33-b322-43b3-8dca-b4d7f224158e" Sep 9 23:52:16.486286 containerd[1537]: time="2025-09-09T23:52:16.486210148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2zkf,Uid:18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.486623 kubelet[2670]: E0909 23:52:16.486563 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:16.486623 kubelet[2670]: E0909 23:52:16.486619 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2zkf" Sep 9 23:52:16.486623 kubelet[2670]: E0909 23:52:16.486640 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2zkf" Sep 9 23:52:16.486791 kubelet[2670]: E0909 23:52:16.486676 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d2zkf_kube-system(18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d2zkf_kube-system(18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17507cd0e9ba5e430871dd34538d2a1c1fcd9de2e52df32ea23e2742730f1f31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d2zkf" podUID="18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7" Sep 9 23:52:16.627057 containerd[1537]: time="2025-09-09T23:52:16.626967959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 23:52:17.114379 systemd[1]: run-netns-cni\x2d2b42ad45\x2d9c32\x2d42ba\x2dd56c\x2d64610f74f170.mount: Deactivated successfully. Sep 9 23:52:17.114473 systemd[1]: run-netns-cni\x2d1e19f2a6\x2d84dc\x2db34f\x2d50bf\x2d95e213ec79d6.mount: Deactivated successfully. 
Sep 9 23:52:17.114520 systemd[1]: run-netns-cni\x2d5f002ef5\x2d6ba8\x2dc243\x2d8582\x2dfac063824eba.mount: Deactivated successfully. Sep 9 23:52:17.114565 systemd[1]: run-netns-cni\x2dcaadf7c1\x2d906b\x2d6904\x2d67e1\x2dcc025b90ff32.mount: Deactivated successfully. Sep 9 23:52:17.517486 systemd[1]: Created slice kubepods-besteffort-podacbcd775_647a_401e_8ee5_46577cb61830.slice - libcontainer container kubepods-besteffort-podacbcd775_647a_401e_8ee5_46577cb61830.slice. Sep 9 23:52:17.520847 containerd[1537]: time="2025-09-09T23:52:17.520805748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgp8h,Uid:acbcd775-647a-401e-8ee5-46577cb61830,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:17.581323 containerd[1537]: time="2025-09-09T23:52:17.581258057Z" level=error msg="Failed to destroy network for sandbox \"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:17.583141 systemd[1]: run-netns-cni\x2d5209495b\x2d7be1\x2d6f95\x2d7056\x2d410dbdd4ad2e.mount: Deactivated successfully. 
Sep 9 23:52:17.586306 containerd[1537]: time="2025-09-09T23:52:17.586224355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgp8h,Uid:acbcd775-647a-401e-8ee5-46577cb61830,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:17.586572 kubelet[2670]: E0909 23:52:17.586501 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:17.586914 kubelet[2670]: E0909 23:52:17.586586 2670 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgp8h" Sep 9 23:52:17.586914 kubelet[2670]: E0909 23:52:17.586616 2670 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fgp8h" Sep 9 
23:52:17.586914 kubelet[2670]: E0909 23:52:17.586668 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fgp8h_calico-system(acbcd775-647a-401e-8ee5-46577cb61830)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fgp8h_calico-system(acbcd775-647a-401e-8ee5-46577cb61830)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be8023082a2264586ccd698e25153c697982ad3165fc0f9427fbb08d37d5f8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fgp8h" podUID="acbcd775-647a-401e-8ee5-46577cb61830" Sep 9 23:52:20.512464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384390611.mount: Deactivated successfully. Sep 9 23:52:20.813423 containerd[1537]: time="2025-09-09T23:52:20.812928708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:20.813815 containerd[1537]: time="2025-09-09T23:52:20.813722872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 23:52:20.815673 containerd[1537]: time="2025-09-09T23:52:20.815607037Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:20.829151 containerd[1537]: time="2025-09-09T23:52:20.829078581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:20.830001 containerd[1537]: time="2025-09-09T23:52:20.829956367Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.202949716s" Sep 9 23:52:20.830001 containerd[1537]: time="2025-09-09T23:52:20.829997977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 23:52:20.844167 containerd[1537]: time="2025-09-09T23:52:20.843521695Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 23:52:20.854518 containerd[1537]: time="2025-09-09T23:52:20.854468309Z" level=info msg="Container 0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:20.861006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628586428.mount: Deactivated successfully. 
Sep 9 23:52:20.879008 containerd[1537]: time="2025-09-09T23:52:20.878872384Z" level=info msg="CreateContainer within sandbox \"284bff0b8b74fa2b4b2e608239543388bbc233558cfd5f00270d847863405bf6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\"" Sep 9 23:52:20.879566 containerd[1537]: time="2025-09-09T23:52:20.879429247Z" level=info msg="StartContainer for \"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\"" Sep 9 23:52:20.882346 containerd[1537]: time="2025-09-09T23:52:20.882137024Z" level=info msg="connecting to shim 0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026" address="unix:///run/containerd/s/0d3541b1dba270fad126e9d4eabdc0000572490dd660da3846599d409bd05745" protocol=ttrpc version=3 Sep 9 23:52:20.905467 systemd[1]: Started cri-containerd-0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026.scope - libcontainer container 0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026. Sep 9 23:52:20.975203 containerd[1537]: time="2025-09-09T23:52:20.975058836Z" level=info msg="StartContainer for \"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" returns successfully" Sep 9 23:52:21.112228 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 23:52:21.112355 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 23:52:21.341567 kubelet[2670]: I0909 23:52:21.341525 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzbnr\" (UniqueName: \"kubernetes.io/projected/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-kube-api-access-dzbnr\") pod \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " Sep 9 23:52:21.342065 kubelet[2670]: I0909 23:52:21.341581 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-backend-key-pair\") pod \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " Sep 9 23:52:21.342065 kubelet[2670]: I0909 23:52:21.341602 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-ca-bundle\") pod \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\" (UID: \"d6b9a9c5-0483-4748-b9cf-14db01b82a3b\") " Sep 9 23:52:21.345920 kubelet[2670]: I0909 23:52:21.345862 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d6b9a9c5-0483-4748-b9cf-14db01b82a3b" (UID: "d6b9a9c5-0483-4748-b9cf-14db01b82a3b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 23:52:21.350461 kubelet[2670]: I0909 23:52:21.350410 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d6b9a9c5-0483-4748-b9cf-14db01b82a3b" (UID: "d6b9a9c5-0483-4748-b9cf-14db01b82a3b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 23:52:21.351433 kubelet[2670]: I0909 23:52:21.351403 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-kube-api-access-dzbnr" (OuterVolumeSpecName: "kube-api-access-dzbnr") pod "d6b9a9c5-0483-4748-b9cf-14db01b82a3b" (UID: "d6b9a9c5-0483-4748-b9cf-14db01b82a3b"). InnerVolumeSpecName "kube-api-access-dzbnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 23:52:21.442728 kubelet[2670]: I0909 23:52:21.442674 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 23:52:21.442728 kubelet[2670]: I0909 23:52:21.442710 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 23:52:21.442728 kubelet[2670]: I0909 23:52:21.442722 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzbnr\" (UniqueName: \"kubernetes.io/projected/d6b9a9c5-0483-4748-b9cf-14db01b82a3b-kube-api-access-dzbnr\") on node \"localhost\" DevicePath \"\"" Sep 9 23:52:21.513593 systemd[1]: var-lib-kubelet-pods-d6b9a9c5\x2d0483\x2d4748\x2db9cf\x2d14db01b82a3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzbnr.mount: Deactivated successfully. Sep 9 23:52:21.513707 systemd[1]: var-lib-kubelet-pods-d6b9a9c5\x2d0483\x2d4748\x2db9cf\x2d14db01b82a3b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 23:52:21.526429 systemd[1]: Removed slice kubepods-besteffort-podd6b9a9c5_0483_4748_b9cf_14db01b82a3b.slice - libcontainer container kubepods-besteffort-podd6b9a9c5_0483_4748_b9cf_14db01b82a3b.slice. 
Sep 9 23:52:21.669117 kubelet[2670]: I0909 23:52:21.666901 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g5625" podStartSLOduration=2.192855948 podStartE2EDuration="14.666884767s" podCreationTimestamp="2025-09-09 23:52:07 +0000 UTC" firstStartedPulling="2025-09-09 23:52:08.356927405 +0000 UTC m=+20.937680446" lastFinishedPulling="2025-09-09 23:52:20.830956224 +0000 UTC m=+33.411709265" observedRunningTime="2025-09-09 23:52:21.66653784 +0000 UTC m=+34.247290881" watchObservedRunningTime="2025-09-09 23:52:21.666884767 +0000 UTC m=+34.247637808" Sep 9 23:52:21.736594 systemd[1]: Created slice kubepods-besteffort-pod1213785f_2d73_424b_85a3_ba7770d8f53a.slice - libcontainer container kubepods-besteffort-pod1213785f_2d73_424b_85a3_ba7770d8f53a.slice. Sep 9 23:52:21.845572 kubelet[2670]: I0909 23:52:21.845531 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1213785f-2d73-424b-85a3-ba7770d8f53a-whisker-ca-bundle\") pod \"whisker-5c9c46d97f-2d9nx\" (UID: \"1213785f-2d73-424b-85a3-ba7770d8f53a\") " pod="calico-system/whisker-5c9c46d97f-2d9nx" Sep 9 23:52:21.845572 kubelet[2670]: I0909 23:52:21.845576 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1213785f-2d73-424b-85a3-ba7770d8f53a-whisker-backend-key-pair\") pod \"whisker-5c9c46d97f-2d9nx\" (UID: \"1213785f-2d73-424b-85a3-ba7770d8f53a\") " pod="calico-system/whisker-5c9c46d97f-2d9nx" Sep 9 23:52:21.845738 kubelet[2670]: I0909 23:52:21.845594 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4l5\" (UniqueName: \"kubernetes.io/projected/1213785f-2d73-424b-85a3-ba7770d8f53a-kube-api-access-lg4l5\") pod \"whisker-5c9c46d97f-2d9nx\" (UID: 
\"1213785f-2d73-424b-85a3-ba7770d8f53a\") " pod="calico-system/whisker-5c9c46d97f-2d9nx" Sep 9 23:52:22.040482 containerd[1537]: time="2025-09-09T23:52:22.039800241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9c46d97f-2d9nx,Uid:1213785f-2d73-424b-85a3-ba7770d8f53a,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:22.251709 containerd[1537]: time="2025-09-09T23:52:22.251661449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" id:\"0c182314002e500edab3a0ca9d6323b488eb76f8ac40b32ff02ac8ec2ebcdb73\" pid:3873 exit_status:1 exited_at:{seconds:1757461942 nanos:250461560}" Sep 9 23:52:22.265095 systemd-networkd[1436]: calie1cc3a8f356: Link UP Sep 9 23:52:22.265339 systemd-networkd[1436]: calie1cc3a8f356: Gained carrier Sep 9 23:52:22.281633 containerd[1537]: 2025-09-09 23:52:22.063 [INFO][3839] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:22.281633 containerd[1537]: 2025-09-09 23:52:22.105 [INFO][3839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0 whisker-5c9c46d97f- calico-system 1213785f-2d73-424b-85a3-ba7770d8f53a 907 0 2025-09-09 23:52:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c9c46d97f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c9c46d97f-2d9nx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie1cc3a8f356 [] [] }} ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-" Sep 9 23:52:22.281633 containerd[1537]: 2025-09-09 23:52:22.105 [INFO][3839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.281633 containerd[1537]: 2025-09-09 23:52:22.196 [INFO][3853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" HandleID="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Workload="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.196 [INFO][3853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" HandleID="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Workload="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c9c46d97f-2d9nx", "timestamp":"2025-09-09 23:52:22.196094901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.196 [INFO][3853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.196 [INFO][3853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.196 [INFO][3853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.210 [INFO][3853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" host="localhost" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.218 [INFO][3853] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.224 [INFO][3853] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.231 [INFO][3853] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.233 [INFO][3853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:22.281849 containerd[1537]: 2025-09-09 23:52:22.233 [INFO][3853] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" host="localhost" Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.235 [INFO][3853] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646 Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.240 [INFO][3853] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" host="localhost" Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.247 [INFO][3853] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" host="localhost" Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.247 [INFO][3853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" host="localhost" Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.247 [INFO][3853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:22.282077 containerd[1537]: 2025-09-09 23:52:22.247 [INFO][3853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" HandleID="k8s-pod-network.40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Workload="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.282197 containerd[1537]: 2025-09-09 23:52:22.250 [INFO][3839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0", GenerateName:"whisker-5c9c46d97f-", Namespace:"calico-system", SelfLink:"", UID:"1213785f-2d73-424b-85a3-ba7770d8f53a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9c46d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c9c46d97f-2d9nx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1cc3a8f356", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:22.282197 containerd[1537]: 2025-09-09 23:52:22.250 [INFO][3839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.282276 containerd[1537]: 2025-09-09 23:52:22.251 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1cc3a8f356 ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.282276 containerd[1537]: 2025-09-09 23:52:22.264 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.282591 containerd[1537]: 2025-09-09 23:52:22.265 [INFO][3839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" 
WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0", GenerateName:"whisker-5c9c46d97f-", Namespace:"calico-system", SelfLink:"", UID:"1213785f-2d73-424b-85a3-ba7770d8f53a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c9c46d97f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646", Pod:"whisker-5c9c46d97f-2d9nx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1cc3a8f356", MAC:"06:0c:82:05:0a:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:22.282654 containerd[1537]: 2025-09-09 23:52:22.278 [INFO][3839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" Namespace="calico-system" Pod="whisker-5c9c46d97f-2d9nx" WorkloadEndpoint="localhost-k8s-whisker--5c9c46d97f--2d9nx-eth0" Sep 9 23:52:22.352376 containerd[1537]: time="2025-09-09T23:52:22.351912925Z" level=info msg="connecting to shim 
40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646" address="unix:///run/containerd/s/0856871df19cab88e67e3d4a170222f54145c675dd142bb34c7778db3969fc81" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:22.374824 containerd[1537]: time="2025-09-09T23:52:22.374774073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" id:\"ccb2153572f031b98cd99ab5823156cc2e4f63bd7ebc5cf5f3b6149504229389\" pid:3907 exit_status:1 exited_at:{seconds:1757461942 nanos:374506169}" Sep 9 23:52:22.387460 systemd[1]: Started cri-containerd-40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646.scope - libcontainer container 40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646. Sep 9 23:52:22.402262 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:22.428053 containerd[1537]: time="2025-09-09T23:52:22.427988015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9c46d97f-2d9nx,Uid:1213785f-2d73-424b-85a3-ba7770d8f53a,Namespace:calico-system,Attempt:0,} returns sandbox id \"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646\"" Sep 9 23:52:22.429836 containerd[1537]: time="2025-09-09T23:52:22.429799412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 23:52:22.752411 containerd[1537]: time="2025-09-09T23:52:22.752362694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" id:\"f21c2560c0f7fd71ac765d36c58340e977807ccf7d62ca2a0dd33979007f05f0\" pid:4077 exit_status:1 exited_at:{seconds:1757461942 nanos:752016170}" Sep 9 23:52:23.290521 containerd[1537]: time="2025-09-09T23:52:23.290463674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:23.291938 
containerd[1537]: time="2025-09-09T23:52:23.291902490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 23:52:23.292902 containerd[1537]: time="2025-09-09T23:52:23.292877598Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:23.300350 containerd[1537]: time="2025-09-09T23:52:23.300104446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:23.300829 containerd[1537]: time="2025-09-09T23:52:23.300799848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 870.962868ms" Sep 9 23:52:23.300874 containerd[1537]: time="2025-09-09T23:52:23.300835817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 23:52:23.306103 containerd[1537]: time="2025-09-09T23:52:23.306046394Z" level=info msg="CreateContainer within sandbox \"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 23:52:23.328537 containerd[1537]: time="2025-09-09T23:52:23.328429983Z" level=info msg="Container ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:23.336784 containerd[1537]: time="2025-09-09T23:52:23.336729162Z" level=info msg="CreateContainer within sandbox 
\"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223\"" Sep 9 23:52:23.337363 containerd[1537]: time="2025-09-09T23:52:23.337332943Z" level=info msg="StartContainer for \"ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223\"" Sep 9 23:52:23.338446 containerd[1537]: time="2025-09-09T23:52:23.338419757Z" level=info msg="connecting to shim ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223" address="unix:///run/containerd/s/0856871df19cab88e67e3d4a170222f54145c675dd142bb34c7778db3969fc81" protocol=ttrpc version=3 Sep 9 23:52:23.367500 systemd[1]: Started cri-containerd-ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223.scope - libcontainer container ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223. Sep 9 23:52:23.404111 containerd[1537]: time="2025-09-09T23:52:23.404075095Z" level=info msg="StartContainer for \"ce3c4b05e4a80907cb64f2310b65da317c371dc4eca338e350ffae4b3a43c223\" returns successfully" Sep 9 23:52:23.406227 containerd[1537]: time="2025-09-09T23:52:23.406194830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 23:52:23.511925 kubelet[2670]: I0909 23:52:23.511877 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b9a9c5-0483-4748-b9cf-14db01b82a3b" path="/var/lib/kubelet/pods/d6b9a9c5-0483-4748-b9cf-14db01b82a3b/volumes" Sep 9 23:52:23.764125 containerd[1537]: time="2025-09-09T23:52:23.764079196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" id:\"a6b2df4b634a080092c3b8071196e40dabeed7a243f39ac970c5a48075261001\" pid:4144 exit_status:1 exited_at:{seconds:1757461943 nanos:761489191}" Sep 9 23:52:23.874440 systemd-networkd[1436]: calie1cc3a8f356: Gained IPv6LL Sep 9 23:52:24.624251 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4061292634.mount: Deactivated successfully. Sep 9 23:52:24.642646 containerd[1537]: time="2025-09-09T23:52:24.642594819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:24.644934 containerd[1537]: time="2025-09-09T23:52:24.644892460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 23:52:24.645898 containerd[1537]: time="2025-09-09T23:52:24.645862240Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:24.647996 containerd[1537]: time="2025-09-09T23:52:24.647928549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:24.648777 containerd[1537]: time="2025-09-09T23:52:24.648430623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.242201225s" Sep 9 23:52:24.648777 containerd[1537]: time="2025-09-09T23:52:24.648462830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 23:52:24.651197 containerd[1537]: time="2025-09-09T23:52:24.651151480Z" level=info msg="CreateContainer within sandbox \"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646\" for 
container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 23:52:24.660928 containerd[1537]: time="2025-09-09T23:52:24.660875404Z" level=info msg="Container 18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:24.669614 containerd[1537]: time="2025-09-09T23:52:24.669567415Z" level=info msg="CreateContainer within sandbox \"40ea4df114eb0c33d4b20bc5a5e8880308547d9990538e66c9936d00fd6fc646\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519\"" Sep 9 23:52:24.670226 containerd[1537]: time="2025-09-09T23:52:24.670191477Z" level=info msg="StartContainer for \"18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519\"" Sep 9 23:52:24.671258 containerd[1537]: time="2025-09-09T23:52:24.671233153Z" level=info msg="connecting to shim 18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519" address="unix:///run/containerd/s/0856871df19cab88e67e3d4a170222f54145c675dd142bb34c7778db3969fc81" protocol=ttrpc version=3 Sep 9 23:52:24.695471 systemd[1]: Started cri-containerd-18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519.scope - libcontainer container 18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519. 
Sep 9 23:52:24.740342 containerd[1537]: time="2025-09-09T23:52:24.740263564Z" level=info msg="StartContainer for \"18e7c5b26c77d4eaa7bd0e150bb8b40800c8da3b39ccb4969b3f406c642d1519\" returns successfully" Sep 9 23:52:25.688689 kubelet[2670]: I0909 23:52:25.688388 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c9c46d97f-2d9nx" podStartSLOduration=2.468522134 podStartE2EDuration="4.688368785s" podCreationTimestamp="2025-09-09 23:52:21 +0000 UTC" firstStartedPulling="2025-09-09 23:52:22.429528066 +0000 UTC m=+35.010281107" lastFinishedPulling="2025-09-09 23:52:24.649374717 +0000 UTC m=+37.230127758" observedRunningTime="2025-09-09 23:52:25.686853731 +0000 UTC m=+38.267606812" watchObservedRunningTime="2025-09-09 23:52:25.688368785 +0000 UTC m=+38.269121866" Sep 9 23:52:28.510144 containerd[1537]: time="2025-09-09T23:52:28.509824276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-46xdr,Uid:1cfc1f33-b322-43b3-8dca-b4d7f224158e,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:28.510144 containerd[1537]: time="2025-09-09T23:52:28.509905813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669764c947-zn8lm,Uid:cc8d9799-b132-4123-89ce-8fd0e5cf7847,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:28.510144 containerd[1537]: time="2025-09-09T23:52:28.510035639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2zkf,Uid:18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:28.725912 systemd-networkd[1436]: calie74aa68e5a9: Link UP Sep 9 23:52:28.726961 systemd-networkd[1436]: calie74aa68e5a9: Gained carrier Sep 9 23:52:28.742163 containerd[1537]: 2025-09-09 23:52:28.585 [INFO][4337] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:28.742163 containerd[1537]: 2025-09-09 23:52:28.603 [INFO][4337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0 calico-kube-controllers-669764c947- calico-system cc8d9799-b132-4123-89ce-8fd0e5cf7847 844 0 2025-09-09 23:52:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:669764c947 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-669764c947-zn8lm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie74aa68e5a9 [] [] }} ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-" Sep 9 23:52:28.742163 containerd[1537]: 2025-09-09 23:52:28.603 [INFO][4337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.742163 containerd[1537]: 2025-09-09 23:52:28.672 [INFO][4374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" HandleID="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Workload="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.673 [INFO][4374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" HandleID="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" 
Workload="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006091e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-669764c947-zn8lm", "timestamp":"2025-09-09 23:52:28.672213842 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.673 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.673 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.673 [INFO][4374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.687 [INFO][4374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" host="localhost" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.695 [INFO][4374] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.699 [INFO][4374] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.703 [INFO][4374] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.706 [INFO][4374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.742481 containerd[1537]: 2025-09-09 23:52:28.706 [INFO][4374] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" host="localhost" Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.708 [INFO][4374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574 Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.712 [INFO][4374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" host="localhost" Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4374] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" host="localhost" Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" host="localhost" Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:52:28.742833 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" HandleID="k8s-pod-network.dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Workload="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.742984 containerd[1537]: 2025-09-09 23:52:28.722 [INFO][4337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0", GenerateName:"calico-kube-controllers-669764c947-", Namespace:"calico-system", SelfLink:"", UID:"cc8d9799-b132-4123-89ce-8fd0e5cf7847", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669764c947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-669764c947-zn8lm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie74aa68e5a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.743052 containerd[1537]: 2025-09-09 23:52:28.722 [INFO][4337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.743052 containerd[1537]: 2025-09-09 23:52:28.723 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie74aa68e5a9 ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.743052 containerd[1537]: 2025-09-09 23:52:28.727 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.743124 containerd[1537]: 2025-09-09 23:52:28.727 [INFO][4337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0", GenerateName:"calico-kube-controllers-669764c947-", Namespace:"calico-system", SelfLink:"", UID:"cc8d9799-b132-4123-89ce-8fd0e5cf7847", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"669764c947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574", Pod:"calico-kube-controllers-669764c947-zn8lm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie74aa68e5a9", MAC:"3a:61:c4:21:b6:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.743179 containerd[1537]: 2025-09-09 23:52:28.739 [INFO][4337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" Namespace="calico-system" Pod="calico-kube-controllers-669764c947-zn8lm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--669764c947--zn8lm-eth0" Sep 9 23:52:28.766619 containerd[1537]: time="2025-09-09T23:52:28.765925729Z" level=info msg="connecting to shim 
dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574" address="unix:///run/containerd/s/a4724b50e5f2fc3a17481f905c8ecb0ede75287df9a4cf6f2d3217491093ebc5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:28.796729 systemd[1]: Started cri-containerd-dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574.scope - libcontainer container dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574. Sep 9 23:52:28.810145 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:28.830591 systemd-networkd[1436]: calic441468f66b: Link UP Sep 9 23:52:28.832082 systemd-networkd[1436]: calic441468f66b: Gained carrier Sep 9 23:52:28.840259 containerd[1537]: time="2025-09-09T23:52:28.840204186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-669764c947-zn8lm,Uid:cc8d9799-b132-4123-89ce-8fd0e5cf7847,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574\"" Sep 9 23:52:28.843461 containerd[1537]: time="2025-09-09T23:52:28.843394755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 23:52:28.851654 containerd[1537]: 2025-09-09 23:52:28.595 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:28.851654 containerd[1537]: 2025-09-09 23:52:28.636 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0 coredns-668d6bf9bc- kube-system 18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7 843 0 2025-09-09 23:51:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d2zkf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic441468f66b 
[{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-" Sep 9 23:52:28.851654 containerd[1537]: 2025-09-09 23:52:28.637 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.851654 containerd[1537]: 2025-09-09 23:52:28.694 [INFO][4383] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" HandleID="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Workload="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.694 [INFO][4383] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" HandleID="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Workload="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400013e000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d2zkf", "timestamp":"2025-09-09 23:52:28.694081807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.694 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.719 [INFO][4383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.787 [INFO][4383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" host="localhost" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.795 [INFO][4383] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.801 [INFO][4383] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.805 [INFO][4383] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.808 [INFO][4383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.851942 containerd[1537]: 2025-09-09 23:52:28.808 [INFO][4383] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" host="localhost" Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.811 [INFO][4383] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.816 [INFO][4383] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" host="localhost" Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.822 [INFO][4383] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" host="localhost" Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.822 [INFO][4383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" host="localhost" Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.822 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:28.852257 containerd[1537]: 2025-09-09 23:52:28.822 [INFO][4383] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" HandleID="k8s-pod-network.b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Workload="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.852400 containerd[1537]: 2025-09-09 23:52:28.827 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d2zkf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic441468f66b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.852486 containerd[1537]: 2025-09-09 23:52:28.827 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.852486 containerd[1537]: 2025-09-09 23:52:28.827 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic441468f66b ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.852486 containerd[1537]: 2025-09-09 23:52:28.832 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.852558 containerd[1537]: 2025-09-09 23:52:28.833 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd", Pod:"coredns-668d6bf9bc-d2zkf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic441468f66b", MAC:"32:6d:b9:30:87:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.852558 containerd[1537]: 2025-09-09 23:52:28.847 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2zkf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d2zkf-eth0" Sep 9 23:52:28.877968 containerd[1537]: time="2025-09-09T23:52:28.877875883Z" level=info msg="connecting to shim b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd" address="unix:///run/containerd/s/45366e439ecaa9eed2a292c657923bb8731e04a81850275334410db38a4a484b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:28.897440 systemd[1]: Started cri-containerd-b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd.scope - libcontainer container b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd. 
Sep 9 23:52:28.910857 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:28.931412 systemd-networkd[1436]: cali9b6ac353205: Link UP Sep 9 23:52:28.931893 systemd-networkd[1436]: cali9b6ac353205: Gained carrier Sep 9 23:52:28.949102 containerd[1537]: time="2025-09-09T23:52:28.949034186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2zkf,Uid:18a25db4-ea4e-4b4f-a6d8-c7abbf49ccc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd\"" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.600 [INFO][4330] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.633 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--46xdr-eth0 goldmane-54d579b49d- calico-system 1cfc1f33-b322-43b3-8dca-b4d7f224158e 842 0 2025-09-09 23:52:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-46xdr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9b6ac353205 [] [] }} ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.633 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 
23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.695 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" HandleID="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Workload="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.695 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" HandleID="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Workload="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000111450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-46xdr", "timestamp":"2025-09-09 23:52:28.69576859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.696 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.823 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.823 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.886 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.899 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.905 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.907 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.910 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.910 [INFO][4385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.911 [INFO][4385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7 Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.915 [INFO][4385] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.923 [INFO][4385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.924 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" host="localhost" Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.924 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:28.950268 containerd[1537]: 2025-09-09 23:52:28.924 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" HandleID="k8s-pod-network.f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Workload="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.927 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--46xdr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1cfc1f33-b322-43b3-8dca-b4d7f224158e", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-46xdr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b6ac353205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.927 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.927 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b6ac353205 ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.930 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.931 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--46xdr-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"1cfc1f33-b322-43b3-8dca-b4d7f224158e", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7", Pod:"goldmane-54d579b49d-46xdr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9b6ac353205", MAC:"62:0f:d7:8a:45:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:28.950781 containerd[1537]: 2025-09-09 23:52:28.943 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" Namespace="calico-system" Pod="goldmane-54d579b49d-46xdr" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--46xdr-eth0" Sep 9 23:52:28.954578 containerd[1537]: time="2025-09-09T23:52:28.954476852Z" level=info msg="CreateContainer within 
sandbox \"b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:52:28.969734 containerd[1537]: time="2025-09-09T23:52:28.969690865Z" level=info msg="Container ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:28.973824 containerd[1537]: time="2025-09-09T23:52:28.973720044Z" level=info msg="connecting to shim f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7" address="unix:///run/containerd/s/0ffc003f6ad4403cf241e0a7bff6c2967b24304b8dafe23653c1a9f9569f270d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:28.977258 containerd[1537]: time="2025-09-09T23:52:28.977211553Z" level=info msg="CreateContainer within sandbox \"b3e8b8a1821cb4930737ce56662c417add6e11a082a4cc4f1df4b65e016bb1cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374\"" Sep 9 23:52:28.977827 containerd[1537]: time="2025-09-09T23:52:28.977808755Z" level=info msg="StartContainer for \"ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374\"" Sep 9 23:52:28.978730 containerd[1537]: time="2025-09-09T23:52:28.978705937Z" level=info msg="connecting to shim ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374" address="unix:///run/containerd/s/45366e439ecaa9eed2a292c657923bb8731e04a81850275334410db38a4a484b" protocol=ttrpc version=3 Sep 9 23:52:28.997468 systemd[1]: Started cri-containerd-ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374.scope - libcontainer container ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374. Sep 9 23:52:29.000375 systemd[1]: Started cri-containerd-f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7.scope - libcontainer container f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7. 
Sep 9 23:52:29.016815 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:29.032945 containerd[1537]: time="2025-09-09T23:52:29.032903595Z" level=info msg="StartContainer for \"ecf8f7b1db08d07cbf109e235246aea0f94aafe24084b6458dcdfc462dd14374\" returns successfully" Sep 9 23:52:29.052159 containerd[1537]: time="2025-09-09T23:52:29.052114004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-46xdr,Uid:1cfc1f33-b322-43b3-8dca-b4d7f224158e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7\"" Sep 9 23:52:29.085212 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:38080.service - OpenSSH per-connection server daemon (10.0.0.1:38080). Sep 9 23:52:29.144448 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 38080 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:52:29.146183 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:52:29.155031 systemd-logind[1520]: New session 8 of user core. Sep 9 23:52:29.159452 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 23:52:29.345347 sshd[4616]: Connection closed by 10.0.0.1 port 38080 Sep 9 23:52:29.345409 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Sep 9 23:52:29.352495 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:38080.service: Deactivated successfully. Sep 9 23:52:29.354361 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 23:52:29.356417 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Sep 9 23:52:29.359038 systemd-logind[1520]: Removed session 8. 
Sep 9 23:52:29.510467 containerd[1537]: time="2025-09-09T23:52:29.510415311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-kzrk6,Uid:9079e24f-6260-49d1-81ae-76ffac9f3325,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:29.643348 systemd-networkd[1436]: cali3dded7daa77: Link UP Sep 9 23:52:29.643540 systemd-networkd[1436]: cali3dded7daa77: Gained carrier Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.536 [INFO][4639] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.557 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0 calico-apiserver-5f79848fcd- calico-apiserver 9079e24f-6260-49d1-81ae-76ffac9f3325 839 0 2025-09-09 23:52:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f79848fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f79848fcd-kzrk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3dded7daa77 [] [] }} ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.557 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.585 [INFO][4654] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.585 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f79848fcd-kzrk6", "timestamp":"2025-09-09 23:52:29.585745646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.586 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.586 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.586 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.601 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.607 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.618 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.620 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.622 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.622 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.624 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4 Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.628 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.635 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.636 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" host="localhost" Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.636 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:29.663545 containerd[1537]: 2025-09-09 23:52:29.636 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.639 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0", GenerateName:"calico-apiserver-5f79848fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9079e24f-6260-49d1-81ae-76ffac9f3325", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f79848fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f79848fcd-kzrk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3dded7daa77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.639 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.639 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dded7daa77 ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.643 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.643 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0", GenerateName:"calico-apiserver-5f79848fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9079e24f-6260-49d1-81ae-76ffac9f3325", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f79848fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4", Pod:"calico-apiserver-5f79848fcd-kzrk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3dded7daa77", MAC:"2a:bd:29:cd:f6:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:29.664099 containerd[1537]: 2025-09-09 23:52:29.660 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-kzrk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0" Sep 9 23:52:29.707540 containerd[1537]: time="2025-09-09T23:52:29.707489984Z" level=info msg="connecting to shim 7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" address="unix:///run/containerd/s/9f3c9c3d0e3ac30301d799fb7e8ad34f79497f508143bbf9ef4d2ca407190f2e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:29.716517 kubelet[2670]: I0909 23:52:29.716436 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d2zkf" podStartSLOduration=35.716416554 podStartE2EDuration="35.716416554s" podCreationTimestamp="2025-09-09 23:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:52:29.714383831 +0000 UTC m=+42.295136912" watchObservedRunningTime="2025-09-09 23:52:29.716416554 +0000 UTC m=+42.297169595" Sep 9 23:52:29.749765 systemd[1]: Started cri-containerd-7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4.scope - libcontainer container 7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4. 
Sep 9 23:52:29.781807 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:29.844080 containerd[1537]: time="2025-09-09T23:52:29.843982126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-kzrk6,Uid:9079e24f-6260-49d1-81ae-76ffac9f3325,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\"" Sep 9 23:52:30.339773 systemd-networkd[1436]: calie74aa68e5a9: Gained IPv6LL Sep 9 23:52:30.391598 containerd[1537]: time="2025-09-09T23:52:30.391500019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:30.393082 containerd[1537]: time="2025-09-09T23:52:30.392862283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 23:52:30.394973 containerd[1537]: time="2025-09-09T23:52:30.394769652Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:30.398179 containerd[1537]: time="2025-09-09T23:52:30.397416404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:30.398179 containerd[1537]: time="2025-09-09T23:52:30.398048247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 
1.554467694s" Sep 9 23:52:30.398179 containerd[1537]: time="2025-09-09T23:52:30.398076252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 23:52:30.399851 containerd[1537]: time="2025-09-09T23:52:30.399820990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 23:52:30.408750 containerd[1537]: time="2025-09-09T23:52:30.408690787Z" level=info msg="CreateContainer within sandbox \"dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 23:52:30.420585 containerd[1537]: time="2025-09-09T23:52:30.420536160Z" level=info msg="Container 9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:30.427488 containerd[1537]: time="2025-09-09T23:52:30.427444138Z" level=info msg="CreateContainer within sandbox \"dc52731cae0e5b88f48795a729d7da4d45ee206b1485a1763a8f1983801c3574\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29\"" Sep 9 23:52:30.427938 containerd[1537]: time="2025-09-09T23:52:30.427906547Z" level=info msg="StartContainer for \"9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29\"" Sep 9 23:52:30.428955 containerd[1537]: time="2025-09-09T23:52:30.428907821Z" level=info msg="connecting to shim 9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29" address="unix:///run/containerd/s/a4724b50e5f2fc3a17481f905c8ecb0ede75287df9a4cf6f2d3217491093ebc5" protocol=ttrpc version=3 Sep 9 23:52:30.460511 systemd[1]: Started cri-containerd-9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29.scope - libcontainer container 9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29. 
Sep 9 23:52:30.505510 containerd[1537]: time="2025-09-09T23:52:30.505470483Z" level=info msg="StartContainer for \"9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29\" returns successfully" Sep 9 23:52:30.509455 containerd[1537]: time="2025-09-09T23:52:30.509415247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f888f5b9b-58hv6,Uid:d46cb187-f162-4110-a2ea-d6d4a6effa16,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:30.636709 systemd-networkd[1436]: cali6cf8e7e92f3: Link UP Sep 9 23:52:30.637367 systemd-networkd[1436]: cali6cf8e7e92f3: Gained carrier Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.529 [INFO][4784] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.542 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0 calico-apiserver-6f888f5b9b- calico-apiserver d46cb187-f162-4110-a2ea-d6d4a6effa16 845 0 2025-09-09 23:52:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f888f5b9b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f888f5b9b-58hv6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cf8e7e92f3 [] [] }} ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.542 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" 
Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.574 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" HandleID="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Workload="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.574 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" HandleID="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Workload="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f888f5b9b-58hv6", "timestamp":"2025-09-09 23:52:30.574521171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.574 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.574 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.574 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.585 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.590 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.597 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.600 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.605 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.605 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.612 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5 Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.619 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.628 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.628 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" host="localhost" Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.628 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:30.649338 containerd[1537]: 2025-09-09 23:52:30.628 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" HandleID="k8s-pod-network.5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Workload="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.633 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0", GenerateName:"calico-apiserver-6f888f5b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d46cb187-f162-4110-a2ea-d6d4a6effa16", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f888f5b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f888f5b9b-58hv6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf8e7e92f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.633 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.633 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cf8e7e92f3 ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.638 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.638 [INFO][4784] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0", GenerateName:"calico-apiserver-6f888f5b9b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d46cb187-f162-4110-a2ea-d6d4a6effa16", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f888f5b9b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5", Pod:"calico-apiserver-6f888f5b9b-58hv6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf8e7e92f3", MAC:"82:30:9c:01:7d:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:30.650238 containerd[1537]: 2025-09-09 23:52:30.646 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" Namespace="calico-apiserver" Pod="calico-apiserver-6f888f5b9b-58hv6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f888f5b9b--58hv6-eth0" Sep 9 23:52:30.666843 containerd[1537]: time="2025-09-09T23:52:30.666791715Z" level=info msg="connecting to shim 5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5" address="unix:///run/containerd/s/a522b0b5ae80797eeaa7866a6a8b9a760aaa99ec240305346ba15b6fd9621ac3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:30.696493 systemd[1]: Started cri-containerd-5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5.scope - libcontainer container 5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5. Sep 9 23:52:30.718975 kubelet[2670]: I0909 23:52:30.718666 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-669764c947-zn8lm" podStartSLOduration=21.16245196 podStartE2EDuration="22.718647074s" podCreationTimestamp="2025-09-09 23:52:08 +0000 UTC" firstStartedPulling="2025-09-09 23:52:28.842952465 +0000 UTC m=+41.423705506" lastFinishedPulling="2025-09-09 23:52:30.399147579 +0000 UTC m=+42.979900620" observedRunningTime="2025-09-09 23:52:30.718440314 +0000 UTC m=+43.299193355" watchObservedRunningTime="2025-09-09 23:52:30.718647074 +0000 UTC m=+43.299400075" Sep 9 23:52:30.719167 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:30.747483 containerd[1537]: time="2025-09-09T23:52:30.747435527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f888f5b9b-58hv6,Uid:d46cb187-f162-4110-a2ea-d6d4a6effa16,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5\"" Sep 9 23:52:30.759206 containerd[1537]: time="2025-09-09T23:52:30.759146114Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29\" id:\"ab654e9f185c25b808aa0a1230b0755fb1fccce7ec7243c0ceb10c87ca060b55\" pid:4875 exited_at:{seconds:1757461950 nanos:757639062}" Sep 9 23:52:30.786568 systemd-networkd[1436]: calic441468f66b: Gained IPv6LL Sep 9 23:52:30.915454 systemd-networkd[1436]: cali9b6ac353205: Gained IPv6LL Sep 9 23:52:31.106408 systemd-networkd[1436]: cali3dded7daa77: Gained IPv6LL Sep 9 23:52:31.383598 kubelet[2670]: I0909 23:52:31.383548 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:31.509624 containerd[1537]: time="2025-09-09T23:52:31.509575693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-tbt2p,Uid:3f44a001-e225-4e57-afa7-a134d2ce290b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:31.509817 containerd[1537]: time="2025-09-09T23:52:31.509796135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nl69b,Uid:ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:31.666530 systemd-networkd[1436]: cali49401b50c34: Link UP Sep 9 23:52:31.668020 systemd-networkd[1436]: cali49401b50c34: Gained carrier Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.550 [INFO][4918] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.576 [INFO][4918] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0 calico-apiserver-5f79848fcd- calico-apiserver 3f44a001-e225-4e57-afa7-a134d2ce290b 836 0 2025-09-09 23:52:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f79848fcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-5f79848fcd-tbt2p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali49401b50c34 [] [] }} ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.577 [INFO][4918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.615 [INFO][4951] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" HandleID="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Workload="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.615 [INFO][4951] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" HandleID="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Workload="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f79848fcd-tbt2p", "timestamp":"2025-09-09 23:52:31.615606996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:31.687540 containerd[1537]: 
2025-09-09 23:52:31.616 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.616 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.616 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.629 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.634 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.641 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.644 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.646 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.646 [INFO][4951] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.648 [INFO][4951] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.652 [INFO][4951] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" host="localhost" 
Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.659 [INFO][4951] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.659 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" host="localhost" Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.659 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:31.687540 containerd[1537]: 2025-09-09 23:52:31.659 [INFO][4951] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" HandleID="k8s-pod-network.6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Workload="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.664 [INFO][4918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0", GenerateName:"calico-apiserver-5f79848fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f44a001-e225-4e57-afa7-a134d2ce290b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f79848fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f79848fcd-tbt2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49401b50c34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.664 [INFO][4918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.664 [INFO][4918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49401b50c34 ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.668 [INFO][4918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.670 [INFO][4918] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0", GenerateName:"calico-apiserver-5f79848fcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f44a001-e225-4e57-afa7-a134d2ce290b", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f79848fcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e", Pod:"calico-apiserver-5f79848fcd-tbt2p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49401b50c34", MAC:"7e:ca:bc:e2:06:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:31.688485 containerd[1537]: 2025-09-09 23:52:31.685 [INFO][4918] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" Namespace="calico-apiserver" Pod="calico-apiserver-5f79848fcd-tbt2p" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--tbt2p-eth0" Sep 9 23:52:31.778085 containerd[1537]: time="2025-09-09T23:52:31.778025009Z" level=info msg="connecting to shim 6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e" address="unix:///run/containerd/s/89b7898c06d69de71cf25cee218054e9d42cb980fc5f0f51d1d74c3539522f21" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:31.795241 systemd-networkd[1436]: calida24006c747: Link UP Sep 9 23:52:31.795675 systemd-networkd[1436]: calida24006c747: Gained carrier Sep 9 23:52:31.812759 systemd[1]: Started cri-containerd-6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e.scope - libcontainer container 6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e. 
Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.571 [INFO][4924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.594 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nl69b-eth0 coredns-668d6bf9bc- kube-system ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4 834 0 2025-09-09 23:51:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nl69b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida24006c747 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.594 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.627 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" HandleID="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Workload="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.628 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" 
HandleID="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Workload="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b280), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-nl69b", "timestamp":"2025-09-09 23:52:31.627813226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.628 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.660 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.660 [INFO][4958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.730 [INFO][4958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.744 [INFO][4958] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.754 [INFO][4958] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.759 [INFO][4958] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.763 [INFO][4958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.763 [INFO][4958] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.765 [INFO][4958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.771 [INFO][4958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.788 [INFO][4958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.788 [INFO][4958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" host="localhost" Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.788 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:52:31.817464 containerd[1537]: 2025-09-09 23:52:31.788 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" HandleID="k8s-pod-network.00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Workload="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.792 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nl69b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nl69b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida24006c747", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.793 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.793 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida24006c747 ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.796 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.797 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nl69b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 51, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e", Pod:"coredns-668d6bf9bc-nl69b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida24006c747", MAC:"46:09:05:05:c1:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:31.818072 containerd[1537]: 2025-09-09 23:52:31.807 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" Namespace="kube-system" Pod="coredns-668d6bf9bc-nl69b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nl69b-eth0" Sep 9 23:52:31.830648 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:31.859382 containerd[1537]: time="2025-09-09T23:52:31.859338154Z" level=info msg="connecting to shim 00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e" address="unix:///run/containerd/s/55e9c3cdefd4d7886c64413229885baae4447bc723b5bfcf87c62ee8db992f0f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:31.862424 containerd[1537]: time="2025-09-09T23:52:31.862387131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f79848fcd-tbt2p,Uid:3f44a001-e225-4e57-afa7-a134d2ce290b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e\"" Sep 9 23:52:31.891551 systemd[1]: Started cri-containerd-00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e.scope - libcontainer container 00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e. 
Sep 9 23:52:31.905212 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:31.937045 containerd[1537]: time="2025-09-09T23:52:31.936995849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nl69b,Uid:ee1e6ae6-30e0-4905-8cb1-5c4e99d07cc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e\"" Sep 9 23:52:31.940709 containerd[1537]: time="2025-09-09T23:52:31.940649620Z" level=info msg="CreateContainer within sandbox \"00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:52:31.962945 containerd[1537]: time="2025-09-09T23:52:31.962750522Z" level=info msg="Container 07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:31.984813 containerd[1537]: time="2025-09-09T23:52:31.984740763Z" level=info msg="CreateContainer within sandbox \"00ed4818d6a3f3abeaa6b6acffcf18988318975af16eec9f194b23b40364ed9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc\"" Sep 9 23:52:31.986058 containerd[1537]: time="2025-09-09T23:52:31.985997320Z" level=info msg="StartContainer for \"07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc\"" Sep 9 23:52:31.987852 containerd[1537]: time="2025-09-09T23:52:31.987819585Z" level=info msg="connecting to shim 07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc" address="unix:///run/containerd/s/55e9c3cdefd4d7886c64413229885baae4447bc723b5bfcf87c62ee8db992f0f" protocol=ttrpc version=3 Sep 9 23:52:32.013567 systemd[1]: Started cri-containerd-07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc.scope - libcontainer container 07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc. 
Sep 9 23:52:32.069261 containerd[1537]: time="2025-09-09T23:52:32.069206942Z" level=info msg="StartContainer for \"07ccc28ab6cbbe6ca108d02df733e7960546ea889ac1671eed8acba9ac0edabc\" returns successfully" Sep 9 23:52:32.384051 systemd-networkd[1436]: vxlan.calico: Link UP Sep 9 23:52:32.384056 systemd-networkd[1436]: vxlan.calico: Gained carrier Sep 9 23:52:32.390393 systemd-networkd[1436]: cali6cf8e7e92f3: Gained IPv6LL Sep 9 23:52:32.510772 containerd[1537]: time="2025-09-09T23:52:32.510661261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgp8h,Uid:acbcd775-647a-401e-8ee5-46577cb61830,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:32.549783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986458955.mount: Deactivated successfully. Sep 9 23:52:32.702300 containerd[1537]: time="2025-09-09T23:52:32.702246645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:32.705355 containerd[1537]: time="2025-09-09T23:52:32.703922515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 23:52:32.705355 containerd[1537]: time="2025-09-09T23:52:32.704785155Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:32.710768 containerd[1537]: time="2025-09-09T23:52:32.710724815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:32.712250 containerd[1537]: time="2025-09-09T23:52:32.711481755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.311623398s" Sep 9 23:52:32.712250 containerd[1537]: time="2025-09-09T23:52:32.711518962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 23:52:32.713259 containerd[1537]: time="2025-09-09T23:52:32.713228478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:52:32.716318 containerd[1537]: time="2025-09-09T23:52:32.716246197Z" level=info msg="CreateContainer within sandbox \"f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 23:52:32.744309 kubelet[2670]: I0909 23:52:32.743715 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nl69b" podStartSLOduration=38.743692797 podStartE2EDuration="38.743692797s" podCreationTimestamp="2025-09-09 23:51:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:52:32.741921189 +0000 UTC m=+45.322674230" watchObservedRunningTime="2025-09-09 23:52:32.743692797 +0000 UTC m=+45.324446118" Sep 9 23:52:32.750313 containerd[1537]: time="2025-09-09T23:52:32.749303036Z" level=info msg="Container 214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:32.785549 containerd[1537]: time="2025-09-09T23:52:32.785256491Z" level=info msg="CreateContainer within sandbox \"f9535c3710640c418fb883e9317edcb77be014fda923a82a679a4729610c1fa7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c\"" Sep 9 23:52:32.793774 
containerd[1537]: time="2025-09-09T23:52:32.792067632Z" level=info msg="StartContainer for \"214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c\"" Sep 9 23:52:32.798904 containerd[1537]: time="2025-09-09T23:52:32.797923476Z" level=info msg="connecting to shim 214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c" address="unix:///run/containerd/s/0ffc003f6ad4403cf241e0a7bff6c2967b24304b8dafe23653c1a9f9569f270d" protocol=ttrpc version=3 Sep 9 23:52:32.835311 systemd-networkd[1436]: cali49401b50c34: Gained IPv6LL Sep 9 23:52:32.839791 systemd[1]: Started cri-containerd-214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c.scope - libcontainer container 214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c. Sep 9 23:52:32.883043 systemd-networkd[1436]: cali24d8b81759e: Link UP Sep 9 23:52:32.883821 systemd-networkd[1436]: cali24d8b81759e: Gained carrier Sep 9 23:52:32.896109 containerd[1537]: time="2025-09-09T23:52:32.896053161Z" level=info msg="StartContainer for \"214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c\" returns successfully" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.726 [INFO][5207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fgp8h-eth0 csi-node-driver- calico-system acbcd775-647a-401e-8ee5-46577cb61830 712 0 2025-09-09 23:52:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fgp8h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali24d8b81759e [] [] }} ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" 
Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.729 [INFO][5207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.824 [INFO][5242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" HandleID="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Workload="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.824 [INFO][5242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" HandleID="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Workload="localhost-k8s-csi--node--driver--fgp8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fgp8h", "timestamp":"2025-09-09 23:52:32.823261126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.824 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.824 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.824 [INFO][5242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.840 [INFO][5242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.849 [INFO][5242] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.854 [INFO][5242] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.856 [INFO][5242] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.859 [INFO][5242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.859 [INFO][5242] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.861 [INFO][5242] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.866 [INFO][5242] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.874 [INFO][5242] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.874 [INFO][5242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" host="localhost" Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.874 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:52:32.914300 containerd[1537]: 2025-09-09 23:52:32.874 [INFO][5242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" HandleID="k8s-pod-network.df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Workload="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.878 [INFO][5207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgp8h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acbcd775-647a-401e-8ee5-46577cb61830", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fgp8h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24d8b81759e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.878 [INFO][5207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.878 [INFO][5207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24d8b81759e ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.883 [INFO][5207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.884 [INFO][5207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" 
Namespace="calico-system" Pod="csi-node-driver-fgp8h" WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fgp8h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acbcd775-647a-401e-8ee5-46577cb61830", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca", Pod:"csi-node-driver-fgp8h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24d8b81759e", MAC:"92:5b:50:0a:d2:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:52:32.916077 containerd[1537]: 2025-09-09 23:52:32.910 [INFO][5207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" Namespace="calico-system" Pod="csi-node-driver-fgp8h" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fgp8h-eth0" Sep 9 23:52:32.960184 containerd[1537]: time="2025-09-09T23:52:32.959753393Z" level=info msg="connecting to shim df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca" address="unix:///run/containerd/s/05e8fcbb1c589f72b853548d5f2e82347c94a13aba7fd5212a9bb8919785670d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:33.000548 systemd[1]: Started cri-containerd-df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca.scope - libcontainer container df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca. Sep 9 23:52:33.012040 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:52:33.025784 containerd[1537]: time="2025-09-09T23:52:33.025745675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fgp8h,Uid:acbcd775-647a-401e-8ee5-46577cb61830,Namespace:calico-system,Attempt:0,} returns sandbox id \"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca\"" Sep 9 23:52:33.666408 systemd-networkd[1436]: calida24006c747: Gained IPv6LL Sep 9 23:52:33.773090 kubelet[2670]: I0909 23:52:33.773032 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-46xdr" podStartSLOduration=22.114237273 podStartE2EDuration="25.773015206s" podCreationTimestamp="2025-09-09 23:52:08 +0000 UTC" firstStartedPulling="2025-09-09 23:52:29.054298077 +0000 UTC m=+41.635051118" lastFinishedPulling="2025-09-09 23:52:32.71307601 +0000 UTC m=+45.293829051" observedRunningTime="2025-09-09 23:52:33.771683605 +0000 UTC m=+46.352436646" watchObservedRunningTime="2025-09-09 23:52:33.773015206 +0000 UTC m=+46.353768207" Sep 9 23:52:33.922444 systemd-networkd[1436]: cali24d8b81759e: Gained IPv6LL Sep 9 23:52:34.114436 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Sep 9 23:52:34.194655 containerd[1537]: time="2025-09-09T23:52:34.194593045Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:34.195159 containerd[1537]: time="2025-09-09T23:52:34.195125420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 23:52:34.196265 containerd[1537]: time="2025-09-09T23:52:34.196218334Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:34.200005 containerd[1537]: time="2025-09-09T23:52:34.199961199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:34.200712 containerd[1537]: time="2025-09-09T23:52:34.200682887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.487340908s" Sep 9 23:52:34.200777 containerd[1537]: time="2025-09-09T23:52:34.200719253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:52:34.203160 containerd[1537]: time="2025-09-09T23:52:34.203106398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:52:34.204250 containerd[1537]: time="2025-09-09T23:52:34.204218195Z" level=info msg="CreateContainer within sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 
9 23:52:34.229858 containerd[1537]: time="2025-09-09T23:52:34.229804581Z" level=info msg="Container d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:34.240265 containerd[1537]: time="2025-09-09T23:52:34.240219191Z" level=info msg="CreateContainer within sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\"" Sep 9 23:52:34.241302 containerd[1537]: time="2025-09-09T23:52:34.241247093Z" level=info msg="StartContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\"" Sep 9 23:52:34.242605 containerd[1537]: time="2025-09-09T23:52:34.242568648Z" level=info msg="connecting to shim d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7" address="unix:///run/containerd/s/9f3c9c3d0e3ac30301d799fb7e8ad34f79497f508143bbf9ef4d2ca407190f2e" protocol=ttrpc version=3 Sep 9 23:52:34.271807 systemd[1]: Started cri-containerd-d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7.scope - libcontainer container d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7. Sep 9 23:52:34.310986 containerd[1537]: time="2025-09-09T23:52:34.310908869Z" level=info msg="StartContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" returns successfully" Sep 9 23:52:34.360565 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:47464.service - OpenSSH per-connection server daemon (10.0.0.1:47464). 
Sep 9 23:52:34.438292 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 47464 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:52:34.439689 containerd[1537]: time="2025-09-09T23:52:34.439643699Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:34.440813 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:52:34.442032 containerd[1537]: time="2025-09-09T23:52:34.441987796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 23:52:34.445426 containerd[1537]: time="2025-09-09T23:52:34.445319708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 242.172943ms" Sep 9 23:52:34.445426 containerd[1537]: time="2025-09-09T23:52:34.445371717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:52:34.448521 containerd[1537]: time="2025-09-09T23:52:34.446708634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:52:34.450938 containerd[1537]: time="2025-09-09T23:52:34.448897903Z" level=info msg="CreateContainer within sandbox \"5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:52:34.451096 systemd-logind[1520]: New session 9 of user core. Sep 9 23:52:34.456710 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 23:52:34.478300 containerd[1537]: time="2025-09-09T23:52:34.477874091Z" level=info msg="Container 5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:34.488746 containerd[1537]: time="2025-09-09T23:52:34.488700534Z" level=info msg="CreateContainer within sandbox \"5e27ea8af8567840022ff9734e9876f27bbd58fafd99b47f65d67f00599382e5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad\"" Sep 9 23:52:34.489215 containerd[1537]: time="2025-09-09T23:52:34.489187661Z" level=info msg="StartContainer for \"5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad\"" Sep 9 23:52:34.490797 containerd[1537]: time="2025-09-09T23:52:34.490761741Z" level=info msg="connecting to shim 5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad" address="unix:///run/containerd/s/a522b0b5ae80797eeaa7866a6a8b9a760aaa99ec240305346ba15b6fd9621ac3" protocol=ttrpc version=3 Sep 9 23:52:34.525502 systemd[1]: Started cri-containerd-5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad.scope - libcontainer container 5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad. 
Sep 9 23:52:34.618615 containerd[1537]: time="2025-09-09T23:52:34.618572287Z" level=info msg="StartContainer for \"5b7554e2c36ae8e17efc3eaf95f7c2386880f512a912d210d2c92a0ce4fd46ad\" returns successfully" Sep 9 23:52:34.705369 containerd[1537]: time="2025-09-09T23:52:34.705240884Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:34.706090 containerd[1537]: time="2025-09-09T23:52:34.706045347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 23:52:34.709100 containerd[1537]: time="2025-09-09T23:52:34.709058202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 262.284916ms" Sep 9 23:52:34.709160 containerd[1537]: time="2025-09-09T23:52:34.709106810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:52:34.711583 containerd[1537]: time="2025-09-09T23:52:34.711463669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 23:52:34.713040 containerd[1537]: time="2025-09-09T23:52:34.713001942Z" level=info msg="CreateContainer within sandbox \"6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:52:34.731061 containerd[1537]: time="2025-09-09T23:52:34.731003180Z" level=info msg="Container 401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:34.735651 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3576147028.mount: Deactivated successfully. Sep 9 23:52:34.744197 kubelet[2670]: I0909 23:52:34.744150 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:34.757379 containerd[1537]: time="2025-09-09T23:52:34.755141629Z" level=info msg="CreateContainer within sandbox \"6fcb98a24ec97a1f5be4318072ee1cdf68624b201dabd461d437a44e69fcb37e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311\"" Sep 9 23:52:34.757379 containerd[1537]: time="2025-09-09T23:52:34.757171349Z" level=info msg="StartContainer for \"401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311\"" Sep 9 23:52:34.758684 containerd[1537]: time="2025-09-09T23:52:34.758647171Z" level=info msg="connecting to shim 401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311" address="unix:///run/containerd/s/89b7898c06d69de71cf25cee218054e9d42cb980fc5f0f51d1d74c3539522f21" protocol=ttrpc version=3 Sep 9 23:52:34.761970 kubelet[2670]: I0909 23:52:34.761707 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f888f5b9b-58hv6" podStartSLOduration=26.06473501 podStartE2EDuration="29.761686551s" podCreationTimestamp="2025-09-09 23:52:05 +0000 UTC" firstStartedPulling="2025-09-09 23:52:30.74957022 +0000 UTC m=+43.330323261" lastFinishedPulling="2025-09-09 23:52:34.446521761 +0000 UTC m=+47.027274802" observedRunningTime="2025-09-09 23:52:34.7595967 +0000 UTC m=+47.340349741" watchObservedRunningTime="2025-09-09 23:52:34.761686551 +0000 UTC m=+47.342439552" Sep 9 23:52:34.808665 systemd[1]: Started cri-containerd-401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311.scope - libcontainer container 401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311. 
Sep 9 23:52:34.815243 sshd[5432]: Connection closed by 10.0.0.1 port 47464 Sep 9 23:52:34.815762 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Sep 9 23:52:34.820477 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:47464.service: Deactivated successfully. Sep 9 23:52:34.823721 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:52:34.827579 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:52:34.831992 systemd-logind[1520]: Removed session 9. Sep 9 23:52:34.899790 containerd[1537]: time="2025-09-09T23:52:34.899737597Z" level=info msg="StartContainer for \"401743cac9bcbc1bf053782ffa69ea6aa9730e24dac43f46bbc9038025ad1311\" returns successfully" Sep 9 23:52:35.766985 kubelet[2670]: I0909 23:52:35.766952 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:35.788106 kubelet[2670]: I0909 23:52:35.788041 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f79848fcd-kzrk6" podStartSLOduration=27.431292683 podStartE2EDuration="31.788021902s" podCreationTimestamp="2025-09-09 23:52:04 +0000 UTC" firstStartedPulling="2025-09-09 23:52:29.845270142 +0000 UTC m=+42.426023183" lastFinishedPulling="2025-09-09 23:52:34.201999321 +0000 UTC m=+46.782752402" observedRunningTime="2025-09-09 23:52:34.782350462 +0000 UTC m=+47.363103503" watchObservedRunningTime="2025-09-09 23:52:35.788021902 +0000 UTC m=+48.368774943" Sep 9 23:52:35.788310 kubelet[2670]: I0909 23:52:35.788143 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f79848fcd-tbt2p" podStartSLOduration=28.944931127 podStartE2EDuration="31.788139483s" podCreationTimestamp="2025-09-09 23:52:04 +0000 UTC" firstStartedPulling="2025-09-09 23:52:31.867527224 +0000 UTC m=+44.448280225" lastFinishedPulling="2025-09-09 23:52:34.71073554 +0000 UTC m=+47.291488581" observedRunningTime="2025-09-09 23:52:35.787874596 
+0000 UTC m=+48.368627637" watchObservedRunningTime="2025-09-09 23:52:35.788139483 +0000 UTC m=+48.368892524" Sep 9 23:52:35.835038 containerd[1537]: time="2025-09-09T23:52:35.833910379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:35.835440 containerd[1537]: time="2025-09-09T23:52:35.835098706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 23:52:35.836462 containerd[1537]: time="2025-09-09T23:52:35.836417256Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:35.840453 containerd[1537]: time="2025-09-09T23:52:35.840390428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:35.844608 containerd[1537]: time="2025-09-09T23:52:35.844532950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.132961502s" Sep 9 23:52:35.844608 containerd[1537]: time="2025-09-09T23:52:35.844601642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 23:52:35.851070 containerd[1537]: time="2025-09-09T23:52:35.851014320Z" level=info msg="CreateContainer within sandbox \"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 
23:52:35.879411 containerd[1537]: time="2025-09-09T23:52:35.878203698Z" level=info msg="Container f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:35.893484 containerd[1537]: time="2025-09-09T23:52:35.893399186Z" level=info msg="CreateContainer within sandbox \"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094\"" Sep 9 23:52:35.895489 containerd[1537]: time="2025-09-09T23:52:35.895436181Z" level=info msg="StartContainer for \"f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094\"" Sep 9 23:52:35.898953 containerd[1537]: time="2025-09-09T23:52:35.898903065Z" level=info msg="connecting to shim f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094" address="unix:///run/containerd/s/05e8fcbb1c589f72b853548d5f2e82347c94a13aba7fd5212a9bb8919785670d" protocol=ttrpc version=3 Sep 9 23:52:35.934583 systemd[1]: Started cri-containerd-f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094.scope - libcontainer container f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094. 
Sep 9 23:52:36.006591 containerd[1537]: time="2025-09-09T23:52:36.006551289Z" level=info msg="StartContainer for \"f4a7f6b5a5a64f5c6f1d0c97e96f4907a8d3f03c856162ac1db737f771c2e094\" returns successfully" Sep 9 23:52:36.009115 containerd[1537]: time="2025-09-09T23:52:36.009062678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 23:52:36.770651 kubelet[2670]: I0909 23:52:36.770565 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:36.770651 kubelet[2670]: I0909 23:52:36.770590 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:36.775908 containerd[1537]: time="2025-09-09T23:52:36.775803587Z" level=info msg="StopContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" with timeout 30 (s)" Sep 9 23:52:36.780720 containerd[1537]: time="2025-09-09T23:52:36.780676501Z" level=info msg="Stop container \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" with signal terminated" Sep 9 23:52:36.794565 systemd[1]: cri-containerd-d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7.scope: Deactivated successfully. 
Sep 9 23:52:36.803893 kubelet[2670]: I0909 23:52:36.803840 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 23:52:36.804380 containerd[1537]: time="2025-09-09T23:52:36.804342470Z" level=info msg="received exit event container_id:\"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" id:\"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" pid:5404 exit_status:1 exited_at:{seconds:1757461956 nanos:803664234}"
Sep 9 23:52:36.804691 containerd[1537]: time="2025-09-09T23:52:36.804610636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" id:\"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" pid:5404 exit_status:1 exited_at:{seconds:1757461956 nanos:803664234}"
Sep 9 23:52:36.838756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7-rootfs.mount: Deactivated successfully.
Sep 9 23:52:36.880028 containerd[1537]: time="2025-09-09T23:52:36.879963649Z" level=info msg="StopContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" returns successfully"
Sep 9 23:52:36.883329 containerd[1537]: time="2025-09-09T23:52:36.883110427Z" level=info msg="StopPodSandbox for \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\""
Sep 9 23:52:36.891622 containerd[1537]: time="2025-09-09T23:52:36.891565234Z" level=info msg="Container to stop \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 23:52:36.899881 systemd[1]: cri-containerd-7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4.scope: Deactivated successfully.
Sep 9 23:52:36.902951 containerd[1537]: time="2025-09-09T23:52:36.902892572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" id:\"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" pid:4705 exit_status:137 exited_at:{seconds:1757461956 nanos:902492823}"
Sep 9 23:52:36.952503 containerd[1537]: time="2025-09-09T23:52:36.952449211Z" level=info msg="shim disconnected" id=7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4 namespace=k8s.io
Sep 9 23:52:36.952903 containerd[1537]: time="2025-09-09T23:52:36.952485577Z" level=warning msg="cleaning up after shim disconnected" id=7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4 namespace=k8s.io
Sep 9 23:52:36.952903 containerd[1537]: time="2025-09-09T23:52:36.952525584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 23:52:36.952943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4-rootfs.mount: Deactivated successfully.
Sep 9 23:52:37.025241 containerd[1537]: time="2025-09-09T23:52:37.022126868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c\" id:\"79f5f5ec84fcb38d70ecd65b573adec7699abaca8aaec94665c9e23eac52656c\" pid:5592 exit_status:1 exited_at:{seconds:1757461956 nanos:928954831}"
Sep 9 23:52:37.024511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4-shm.mount: Deactivated successfully.
Sep 9 23:52:37.029257 containerd[1537]: time="2025-09-09T23:52:37.029205458Z" level=info msg="received exit event sandbox_id:\"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" exit_status:137 exited_at:{seconds:1757461956 nanos:902492823}"
Sep 9 23:52:37.033107 containerd[1537]: time="2025-09-09T23:52:37.032872114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"214ba980defa3582988901a3c37b9e32af8d8fe876649cf677598359da278f5c\" id:\"e89492ec3faa6fab659bcd014051896f3cf619c032f78878496bd733d605c853\" pid:5653 exit_status:1 exited_at:{seconds:1757461957 nanos:32545379}"
Sep 9 23:52:37.092578 containerd[1537]: time="2025-09-09T23:52:37.092531704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:37.093673 containerd[1537]: time="2025-09-09T23:52:37.093630569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 9 23:52:37.095230 containerd[1537]: time="2025-09-09T23:52:37.094718872Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:37.099549 systemd-networkd[1436]: cali3dded7daa77: Link DOWN
Sep 9 23:52:37.099557 systemd-networkd[1436]: cali3dded7daa77: Lost carrier
Sep 9 23:52:37.101195 containerd[1537]: time="2025-09-09T23:52:37.100854464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:37.101195 containerd[1537]: time="2025-09-09T23:52:37.101167396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.092062551s"
Sep 9 23:52:37.101195 containerd[1537]: time="2025-09-09T23:52:37.101198362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 9 23:52:37.109322 containerd[1537]: time="2025-09-09T23:52:37.109245955Z" level=info msg="CreateContainer within sandbox \"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 23:52:37.129449 containerd[1537]: time="2025-09-09T23:52:37.126526820Z" level=info msg="Container 90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0: CDI devices from CRI Config.CDIDevices: []"
Sep 9 23:52:37.144036 containerd[1537]: time="2025-09-09T23:52:37.143969633Z" level=info msg="CreateContainer within sandbox \"df2e642700a0b836a9e4c34e78709ff0fb18789f7ac72be45c7524ec03319bca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0\""
Sep 9 23:52:37.144693 containerd[1537]: time="2025-09-09T23:52:37.144664309Z" level=info msg="StartContainer for \"90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0\""
Sep 9 23:52:37.148106 containerd[1537]: time="2025-09-09T23:52:37.148038077Z" level=info msg="connecting to shim 90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0" address="unix:///run/containerd/s/05e8fcbb1c589f72b853548d5f2e82347c94a13aba7fd5212a9bb8919785670d" protocol=ttrpc version=3
Sep 9 23:52:37.175860 systemd[1]: Started cri-containerd-90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0.scope - libcontainer container 90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0.
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.096 [INFO][5681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.096 [INFO][5681] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" iface="eth0" netns="/var/run/netns/cni-021bafc3-2a63-dea4-1568-cd24adbdab0e"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.097 [INFO][5681] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" iface="eth0" netns="/var/run/netns/cni-021bafc3-2a63-dea4-1568-cd24adbdab0e"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.115 [INFO][5681] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" after=18.980271ms iface="eth0" netns="/var/run/netns/cni-021bafc3-2a63-dea4-1568-cd24adbdab0e"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.115 [INFO][5681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.115 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.151 [INFO][5694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.151 [INFO][5694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.151 [INFO][5694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.224 [INFO][5694] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.224 [INFO][5694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.227 [INFO][5694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:52:37.237173 containerd[1537]: 2025-09-09 23:52:37.232 [INFO][5681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:37.237849 containerd[1537]: time="2025-09-09T23:52:37.237792447Z" level=info msg="TearDown network for sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" successfully"
Sep 9 23:52:37.238617 containerd[1537]: time="2025-09-09T23:52:37.238405470Z" level=info msg="StopPodSandbox for \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" returns successfully"
Sep 9 23:52:37.258568 containerd[1537]: time="2025-09-09T23:52:37.258520811Z" level=info msg="StartContainer for \"90d7a09e60516dfb00c01cdaa9ac0f1aa367685c006605d8fd2256ef1e0650f0\" returns successfully"
Sep 9 23:52:37.359051 kubelet[2670]: I0909 23:52:37.358163 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9079e24f-6260-49d1-81ae-76ffac9f3325-calico-apiserver-certs\") pod \"9079e24f-6260-49d1-81ae-76ffac9f3325\" (UID: \"9079e24f-6260-49d1-81ae-76ffac9f3325\") "
Sep 9 23:52:37.359051 kubelet[2670]: I0909 23:52:37.358253 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27q4k\" (UniqueName: \"kubernetes.io/projected/9079e24f-6260-49d1-81ae-76ffac9f3325-kube-api-access-27q4k\") pod \"9079e24f-6260-49d1-81ae-76ffac9f3325\" (UID: \"9079e24f-6260-49d1-81ae-76ffac9f3325\") "
Sep 9 23:52:37.361229 kubelet[2670]: I0909 23:52:37.361181 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9079e24f-6260-49d1-81ae-76ffac9f3325-kube-api-access-27q4k" (OuterVolumeSpecName: "kube-api-access-27q4k") pod "9079e24f-6260-49d1-81ae-76ffac9f3325" (UID: "9079e24f-6260-49d1-81ae-76ffac9f3325"). InnerVolumeSpecName "kube-api-access-27q4k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 23:52:37.361350 kubelet[2670]: I0909 23:52:37.361301 2670 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079e24f-6260-49d1-81ae-76ffac9f3325-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9079e24f-6260-49d1-81ae-76ffac9f3325" (UID: "9079e24f-6260-49d1-81ae-76ffac9f3325"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 23:52:37.459768 kubelet[2670]: I0909 23:52:37.459583 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27q4k\" (UniqueName: \"kubernetes.io/projected/9079e24f-6260-49d1-81ae-76ffac9f3325-kube-api-access-27q4k\") on node \"localhost\" DevicePath \"\""
Sep 9 23:52:37.459768 kubelet[2670]: I0909 23:52:37.459624 2670 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9079e24f-6260-49d1-81ae-76ffac9f3325-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Sep 9 23:52:37.516619 systemd[1]: Removed slice kubepods-besteffort-pod9079e24f_6260_49d1_81ae_76ffac9f3325.slice - libcontainer container kubepods-besteffort-pod9079e24f_6260_49d1_81ae_76ffac9f3325.slice.
Sep 9 23:52:37.576045 kubelet[2670]: I0909 23:52:37.575974 2670 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 23:52:37.582038 kubelet[2670]: I0909 23:52:37.581972 2670 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 23:52:37.781170 kubelet[2670]: I0909 23:52:37.780886 2670 scope.go:117] "RemoveContainer" containerID="d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7"
Sep 9 23:52:37.783754 containerd[1537]: time="2025-09-09T23:52:37.783653340Z" level=info msg="RemoveContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\""
Sep 9 23:52:37.789815 containerd[1537]: time="2025-09-09T23:52:37.789754205Z" level=info msg="RemoveContainer for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" returns successfully"
Sep 9 23:52:37.790260 kubelet[2670]: I0909 23:52:37.790085 2670 scope.go:117] "RemoveContainer" containerID="d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7"
Sep 9 23:52:37.790503 containerd[1537]: time="2025-09-09T23:52:37.790421918Z" level=error msg="ContainerStatus for \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\": not found"
Sep 9 23:52:37.793777 kubelet[2670]: E0909 23:52:37.793723 2670 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\": not found" containerID="d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7"
Sep 9 23:52:37.805313 kubelet[2670]: I0909 23:52:37.805141 2670 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7"} err="failed to get container status \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5fd0cc04197df3207782705eef617d9819158f07b2f44f333dfd77108015ba7\": not found"
Sep 9 23:52:37.820530 kubelet[2670]: I0909 23:52:37.819824 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fgp8h" podStartSLOduration=25.741141329 podStartE2EDuration="29.819804298s" podCreationTimestamp="2025-09-09 23:52:08 +0000 UTC" firstStartedPulling="2025-09-09 23:52:33.027071275 +0000 UTC m=+45.607824316" lastFinishedPulling="2025-09-09 23:52:37.105734284 +0000 UTC m=+49.686487285" observedRunningTime="2025-09-09 23:52:37.819068054 +0000 UTC m=+50.399821095" watchObservedRunningTime="2025-09-09 23:52:37.819804298 +0000 UTC m=+50.400557299"
Sep 9 23:52:37.952723 systemd[1]: run-netns-cni\x2d021bafc3\x2d2a63\x2ddea4\x2d1568\x2dcd24adbdab0e.mount: Deactivated successfully.
Sep 9 23:52:37.952831 systemd[1]: var-lib-kubelet-pods-9079e24f\x2d6260\x2d49d1\x2d81ae\x2d76ffac9f3325-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27q4k.mount: Deactivated successfully.
Sep 9 23:52:37.952908 systemd[1]: var-lib-kubelet-pods-9079e24f\x2d6260\x2d49d1\x2d81ae\x2d76ffac9f3325-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 9 23:52:39.520909 kubelet[2670]: I0909 23:52:39.520146 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9079e24f-6260-49d1-81ae-76ffac9f3325" path="/var/lib/kubelet/pods/9079e24f-6260-49d1-81ae-76ffac9f3325/volumes"
Sep 9 23:52:39.832840 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474).
Sep 9 23:52:39.911823 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:39.913718 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:39.919264 systemd-logind[1520]: New session 10 of user core.
Sep 9 23:52:39.932523 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 23:52:40.131130 sshd[5758]: Connection closed by 10.0.0.1 port 47474
Sep 9 23:52:40.132405 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:40.142848 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:47474.service: Deactivated successfully.
Sep 9 23:52:40.146560 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 23:52:40.147394 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit.
Sep 9 23:52:40.151090 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:60096.service - OpenSSH per-connection server daemon (10.0.0.1:60096).
Sep 9 23:52:40.151810 systemd-logind[1520]: Removed session 10.
Sep 9 23:52:40.216059 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 60096 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:40.217563 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:40.222042 systemd-logind[1520]: New session 11 of user core.
Sep 9 23:52:40.228494 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 23:52:40.453486 sshd[5775]: Connection closed by 10.0.0.1 port 60096
Sep 9 23:52:40.454070 sshd-session[5772]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:40.467565 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:60096.service: Deactivated successfully.
Sep 9 23:52:40.471255 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 23:52:40.472848 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit.
Sep 9 23:52:40.475826 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:60106.service - OpenSSH per-connection server daemon (10.0.0.1:60106).
Sep 9 23:52:40.479358 systemd-logind[1520]: Removed session 11.
Sep 9 23:52:40.538997 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 60106 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:40.541268 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:40.549131 systemd-logind[1520]: New session 12 of user core.
Sep 9 23:52:40.567525 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 23:52:40.732407 sshd[5790]: Connection closed by 10.0.0.1 port 60106
Sep 9 23:52:40.732676 sshd-session[5787]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:40.736989 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:60106.service: Deactivated successfully.
Sep 9 23:52:40.742790 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 23:52:40.746618 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit.
Sep 9 23:52:40.748672 systemd-logind[1520]: Removed session 12.
Sep 9 23:52:45.761215 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:60108.service - OpenSSH per-connection server daemon (10.0.0.1:60108).
Sep 9 23:52:45.825314 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 60108 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:45.826397 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:45.835677 systemd-logind[1520]: New session 13 of user core.
Sep 9 23:52:45.839514 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 23:52:46.028913 sshd[5815]: Connection closed by 10.0.0.1 port 60108
Sep 9 23:52:46.029822 sshd-session[5812]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:46.040498 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:60108.service: Deactivated successfully.
Sep 9 23:52:46.043269 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 23:52:46.044144 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit.
Sep 9 23:52:46.048027 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:60114.service - OpenSSH per-connection server daemon (10.0.0.1:60114).
Sep 9 23:52:46.049717 systemd-logind[1520]: Removed session 13.
Sep 9 23:52:46.124354 sshd[5828]: Accepted publickey for core from 10.0.0.1 port 60114 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:46.126236 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:46.130827 systemd-logind[1520]: New session 14 of user core.
Sep 9 23:52:46.143492 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 23:52:46.366422 sshd[5831]: Connection closed by 10.0.0.1 port 60114
Sep 9 23:52:46.367008 sshd-session[5828]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:46.378725 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:60114.service: Deactivated successfully.
Sep 9 23:52:46.381322 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 23:52:46.382002 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit.
Sep 9 23:52:46.384539 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:60122.service - OpenSSH per-connection server daemon (10.0.0.1:60122).
Sep 9 23:52:46.385629 systemd-logind[1520]: Removed session 14.
Sep 9 23:52:46.442866 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 60122 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:46.444223 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:46.448691 systemd-logind[1520]: New session 15 of user core.
Sep 9 23:52:46.459481 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 23:52:47.033363 sshd[5845]: Connection closed by 10.0.0.1 port 60122
Sep 9 23:52:47.034356 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:47.043882 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:60122.service: Deactivated successfully.
Sep 9 23:52:47.045939 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 23:52:47.048583 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit.
Sep 9 23:52:47.052848 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:60128.service - OpenSSH per-connection server daemon (10.0.0.1:60128).
Sep 9 23:52:47.055097 systemd-logind[1520]: Removed session 15.
Sep 9 23:52:47.112759 sshd[5865]: Accepted publickey for core from 10.0.0.1 port 60128 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:47.114577 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:47.119555 systemd-logind[1520]: New session 16 of user core.
Sep 9 23:52:47.135512 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 23:52:47.438965 sshd[5868]: Connection closed by 10.0.0.1 port 60128
Sep 9 23:52:47.439841 sshd-session[5865]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:47.447943 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:60128.service: Deactivated successfully.
Sep 9 23:52:47.451624 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 23:52:47.452615 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit.
Sep 9 23:52:47.459823 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:60144.service - OpenSSH per-connection server daemon (10.0.0.1:60144).
Sep 9 23:52:47.461206 systemd-logind[1520]: Removed session 16.
Sep 9 23:52:47.500500 containerd[1537]: time="2025-09-09T23:52:47.500449535Z" level=info msg="StopPodSandbox for \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\""
Sep 9 23:52:47.540561 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 60144 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:47.542251 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:47.556941 systemd-logind[1520]: New session 17 of user core.
Sep 9 23:52:47.563509 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.559 [WARNING][5894] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.559 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.559 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" iface="eth0" netns=""
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.559 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.559 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.585 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.585 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.585 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.595 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.595 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.597 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:52:47.601788 containerd[1537]: 2025-09-09 23:52:47.599 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.602176 containerd[1537]: time="2025-09-09T23:52:47.601851552Z" level=info msg="TearDown network for sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" successfully"
Sep 9 23:52:47.602176 containerd[1537]: time="2025-09-09T23:52:47.601884237Z" level=info msg="StopPodSandbox for \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" returns successfully"
Sep 9 23:52:47.603156 containerd[1537]: time="2025-09-09T23:52:47.602763646Z" level=info msg="RemovePodSandbox for \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\""
Sep 9 23:52:47.603156 containerd[1537]: time="2025-09-09T23:52:47.602805573Z" level=info msg="Forcibly stopping sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\""
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.644 [WARNING][5923] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.645 [INFO][5923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.645 [INFO][5923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" iface="eth0" netns=""
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.645 [INFO][5923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.645 [INFO][5923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.673 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.673 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.673 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.683 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.683 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" HandleID="k8s-pod-network.7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4" Workload="localhost-k8s-calico--apiserver--5f79848fcd--kzrk6-eth0"
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.685 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 23:52:47.690977 containerd[1537]: 2025-09-09 23:52:47.688 [INFO][5923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4"
Sep 9 23:52:47.692459 containerd[1537]: time="2025-09-09T23:52:47.691933867Z" level=info msg="TearDown network for sandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" successfully"
Sep 9 23:52:47.700781 containerd[1537]: time="2025-09-09T23:52:47.700708516Z" level=info msg="Ensure that sandbox 7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4 in task-service has been cleanup successfully"
Sep 9 23:52:47.704026 containerd[1537]: time="2025-09-09T23:52:47.703937670Z" level=info msg="RemovePodSandbox \"7362931df3475841f27f8d7760b349b49fc93a30e144bbeef6c592b4a1305ea4\" returns successfully"
Sep 9 23:52:47.715753 sshd[5909]: Connection closed by 10.0.0.1 port 60144
Sep 9 23:52:47.716363 sshd-session[5881]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:47.722525 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:60144.service: Deactivated successfully.
Sep 9 23:52:47.725129 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 23:52:47.726889 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit.
Sep 9 23:52:47.730697 systemd-logind[1520]: Removed session 17.
Sep 9 23:52:52.733400 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:33156.service - OpenSSH per-connection server daemon (10.0.0.1:33156).
Sep 9 23:52:52.784025 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:52.785525 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:52.794507 systemd-logind[1520]: New session 18 of user core.
Sep 9 23:52:52.808176 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 23:52:52.965646 sshd[5964]: Connection closed by 10.0.0.1 port 33156
Sep 9 23:52:52.966165 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:52.972795 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:33156.service: Deactivated successfully.
Sep 9 23:52:52.972977 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit.
Sep 9 23:52:52.974905 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 23:52:52.976248 systemd-logind[1520]: Removed session 18.
Sep 9 23:52:53.729379 containerd[1537]: time="2025-09-09T23:52:53.729333678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f068ef16691bcce39026810fcadc2d7c97fe4b2d0c6d6f8595543878bb97026\" id:\"d2f694b88ba0f1a28faab542a4d98ac1568a624d039821cb434a0d129504e7e7\" pid:5987 exited_at:{seconds:1757461973 nanos:728785199}"
Sep 9 23:52:57.983932 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:33168.service - OpenSSH per-connection server daemon (10.0.0.1:33168).
Sep 9 23:52:58.055198 sshd[6004]: Accepted publickey for core from 10.0.0.1 port 33168 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:58.058067 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:58.065187 systemd-logind[1520]: New session 19 of user core.
Sep 9 23:52:58.079588 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 23:52:58.256431 sshd[6007]: Connection closed by 10.0.0.1 port 33168
Sep 9 23:52:58.257403 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:58.263356 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:33168.service: Deactivated successfully.
Sep 9 23:52:58.269396 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 23:52:58.271737 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit.
Sep 9 23:52:58.273086 systemd-logind[1520]: Removed session 19.
Sep 9 23:53:00.787359 containerd[1537]: time="2025-09-09T23:53:00.787103982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b808a601ca0b904b52c100ca86607b262bfd2a2131c8499b38a559a2818ba29\" id:\"ef54d80ca930963eb300878c6a8f0ae6992f21726e1730d85fb1520ee7a665b9\" pid:6032 exited_at:{seconds:1757461980 nanos:786587674}"
Sep 9 23:53:03.269153 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:40804.service - OpenSSH per-connection server daemon (10.0.0.1:40804).
Sep 9 23:53:03.344109 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 40804 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:53:03.345923 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:53:03.352138 systemd-logind[1520]: New session 20 of user core.
Sep 9 23:53:03.365491 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 23:53:03.623968 sshd[6046]: Connection closed by 10.0.0.1 port 40804
Sep 9 23:53:03.624711 sshd-session[6043]: pam_unix(sshd:session): session closed for user core
Sep 9 23:53:03.629167 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:40804.service: Deactivated successfully.
Sep 9 23:53:03.631553 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 23:53:03.634915 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit.
Sep 9 23:53:03.635916 systemd-logind[1520]: Removed session 20.