Sep 9 23:51:50.771058 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:51:50.771095 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025
Sep 9 23:51:50.771111 kernel: KASLR enabled
Sep 9 23:51:50.771119 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:51:50.771128 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 23:51:50.771137 kernel: random: crng init done
Sep 9 23:51:50.771151 kernel: secureboot: Secure boot disabled
Sep 9 23:51:50.771159 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:51:50.771168 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 23:51:50.771178 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:51:50.771188 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771196 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771205 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771211 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771218 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771225 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771231 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771237 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771244 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:51:50.771251 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:51:50.771258 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 23:51:50.771265 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:50.771271 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 23:51:50.771277 kernel: Zone ranges:
Sep 9 23:51:50.771283 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:50.771290 kernel: DMA32 empty
Sep 9 23:51:50.771296 kernel: Normal empty
Sep 9 23:51:50.771302 kernel: Device empty
Sep 9 23:51:50.771307 kernel: Movable zone start for each node
Sep 9 23:51:50.771314 kernel: Early memory node ranges
Sep 9 23:51:50.771319 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 23:51:50.771325 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 23:51:50.771331 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 23:51:50.771338 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 23:51:50.771344 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 23:51:50.771350 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 23:51:50.771356 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 23:51:50.771367 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 23:51:50.771373 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 23:51:50.771379 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:51:50.771387 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:51:50.771394 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:51:50.771400 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:51:50.771408 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:51:50.771415 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:51:50.771421 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 23:51:50.771427 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:51:50.771573 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:51:50.771582 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:51:50.771588 kernel: psci: Trusted OS migration not required
Sep 9 23:51:50.771595 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:51:50.771601 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:51:50.771608 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 23:51:50.771618 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 23:51:50.771625 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:51:50.771631 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:51:50.771638 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:51:50.771644 kernel: CPU features: detected: Spectre-v4
Sep 9 23:51:50.771650 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:51:50.771657 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:51:50.771663 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:51:50.771670 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:51:50.771676 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:51:50.771682 kernel: alternatives: applying boot alternatives
Sep 9 23:51:50.771690 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:51:50.771699 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:51:50.771705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:51:50.771712 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:51:50.771718 kernel: Fallback order for Node 0: 0
Sep 9 23:51:50.771725 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 23:51:50.771731 kernel: Policy zone: DMA
Sep 9 23:51:50.771738 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:51:50.771744 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 23:51:50.771751 kernel: software IO TLB: area num 4.
Sep 9 23:51:50.771757 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 23:51:50.771764 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 23:51:50.771772 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:51:50.771778 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:51:50.771786 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:51:50.771792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:51:50.771799 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:51:50.771806 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:51:50.771812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:51:50.771818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:51:50.771825 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:51:50.771831 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:51:50.771838 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:51:50.771846 kernel: GICv3: 256 SPIs implemented
Sep 9 23:51:50.771852 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:51:50.771859 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:51:50.771865 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:51:50.771871 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 23:51:50.771878 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:51:50.771884 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:51:50.771891 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:51:50.771898 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:51:50.771904 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 23:51:50.771911 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 23:51:50.771917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:51:50.771934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:50.771942 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:51:50.771949 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:51:50.771955 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:51:50.771962 kernel: arm-pv: using stolen time PV
Sep 9 23:51:50.771969 kernel: Console: colour dummy device 80x25
Sep 9 23:51:50.771976 kernel: ACPI: Core revision 20240827
Sep 9 23:51:50.771982 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:51:50.771989 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:51:50.771996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 23:51:50.772004 kernel: landlock: Up and running.
Sep 9 23:51:50.772011 kernel: SELinux: Initializing.
Sep 9 23:51:50.772017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:51:50.772029 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:51:50.772035 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:51:50.772042 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:51:50.772049 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 23:51:50.772056 kernel: Remapping and enabling EFI services.
Sep 9 23:51:50.772064 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:51:50.772077 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:51:50.772084 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:51:50.772092 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 23:51:50.772100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:50.772108 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:51:50.772115 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:51:50.772123 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:51:50.772130 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 23:51:50.772139 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:50.772146 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:51:50.772153 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:51:50.772160 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:51:50.772168 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 23:51:50.772175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:51:50.772182 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:51:50.772189 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:51:50.772196 kernel: SMP: Total of 4 processors activated.
Sep 9 23:51:50.772205 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:51:50.772212 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:51:50.772219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:51:50.772227 kernel: CPU features: detected: Common not Private translations
Sep 9 23:51:50.772234 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:51:50.772240 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:51:50.772247 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:51:50.772254 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:51:50.772261 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:51:50.772269 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:51:50.772276 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:51:50.772283 kernel: alternatives: applying system-wide alternatives
Sep 9 23:51:50.772290 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 23:51:50.772297 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 9 23:51:50.772304 kernel: devtmpfs: initialized
Sep 9 23:51:50.772311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:51:50.772318 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:51:50.772325 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:51:50.772334 kernel: 0 pages in range for non-PLT usage
Sep 9 23:51:50.772341 kernel: 508576 pages in range for PLT usage
Sep 9 23:51:50.772348 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:51:50.772354 kernel: SMBIOS 3.0.0 present.
Sep 9 23:51:50.772361 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:51:50.772368 kernel: DMI: Memory slots populated: 1/1
Sep 9 23:51:50.772375 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:51:50.772382 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:51:50.772389 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:51:50.772398 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:51:50.772405 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:51:50.772412 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 9 23:51:50.772419 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:51:50.772426 kernel: cpuidle: using governor menu
Sep 9 23:51:50.772443 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:51:50.772450 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:51:50.772457 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:51:50.772464 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:51:50.772473 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:51:50.772480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:51:50.772487 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:51:50.772494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:51:50.772501 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:51:50.772508 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:51:50.772515 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:51:50.772522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:51:50.772528 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:51:50.772536 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:51:50.772543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:51:50.772550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:51:50.772557 kernel: ACPI: Interpreter enabled
Sep 9 23:51:50.772564 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:51:50.772571 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:51:50.772578 kernel: ACPI: CPU0 has been hot-added
Sep 9 23:51:50.772585 kernel: ACPI: CPU1 has been hot-added
Sep 9 23:51:50.772592 kernel: ACPI: CPU2 has been hot-added
Sep 9 23:51:50.772599 kernel: ACPI: CPU3 has been hot-added
Sep 9 23:51:50.772607 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:51:50.772614 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 23:51:50.772621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:51:50.772789 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:51:50.772860 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:51:50.772921 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:51:50.772992 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:51:50.773056 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:51:50.773066 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:51:50.773073 kernel: PCI host bridge to bus 0000:00
Sep 9 23:51:50.773140 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:51:50.773197 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:51:50.773252 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:51:50.773306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:51:50.773400 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 23:51:50.773496 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 23:51:50.773563 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 23:51:50.773623 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 23:51:50.773682 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:51:50.773741 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 23:51:50.773800 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 23:51:50.773864 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 23:51:50.773919 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:51:50.773983 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:51:50.774038 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:51:50.774047 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:51:50.774055 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:51:50.774062 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:51:50.774071 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:51:50.774078 kernel: iommu: Default domain type: Translated
Sep 9 23:51:50.774085 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:51:50.774092 kernel: efivars: Registered efivars operations
Sep 9 23:51:50.774099 kernel: vgaarb: loaded
Sep 9 23:51:50.774106 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:51:50.774113 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:51:50.774120 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:51:50.774127 kernel: pnp: PnP ACPI init
Sep 9 23:51:50.774194 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:51:50.774204 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:51:50.774211 kernel: NET: Registered PF_INET protocol family
Sep 9 23:51:50.774218 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:51:50.774225 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:51:50.774232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:51:50.774239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:51:50.774246 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:51:50.774255 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:51:50.774262 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:51:50.774270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:51:50.774276 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:51:50.774283 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:51:50.774290 kernel: kvm [1]: HYP mode not available
Sep 9 23:51:50.774297 kernel: Initialise system trusted keyrings
Sep 9 23:51:50.774304 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:51:50.774311 kernel: Key type asymmetric registered
Sep 9 23:51:50.774319 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:51:50.774327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 23:51:50.774334 kernel: io scheduler mq-deadline registered
Sep 9 23:51:50.774340 kernel: io scheduler kyber registered
Sep 9 23:51:50.774347 kernel: io scheduler bfq registered
Sep 9 23:51:50.774355 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:51:50.774362 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:51:50.774369 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:51:50.774463 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:51:50.774476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:51:50.774483 kernel: thunder_xcv, ver 1.0
Sep 9 23:51:50.774491 kernel: thunder_bgx, ver 1.0
Sep 9 23:51:50.774498 kernel: nicpf, ver 1.0
Sep 9 23:51:50.774505 kernel: nicvf, ver 1.0
Sep 9 23:51:50.774585 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:51:50.774644 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:51:50 UTC (1757461910)
Sep 9 23:51:50.774654 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:51:50.774661 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 23:51:50.774670 kernel: watchdog: NMI not fully supported
Sep 9 23:51:50.774677 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:51:50.774684 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:51:50.774691 kernel: Segment Routing with IPv6
Sep 9 23:51:50.774698 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:51:50.774705 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:51:50.774711 kernel: Key type dns_resolver registered
Sep 9 23:51:50.774718 kernel: registered taskstats version 1
Sep 9 23:51:50.774725 kernel: Loading compiled-in X.509 certificates
Sep 9 23:51:50.774734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87'
Sep 9 23:51:50.774741 kernel: Demotion targets for Node 0: null
Sep 9 23:51:50.774748 kernel: Key type .fscrypt registered
Sep 9 23:51:50.774755 kernel: Key type fscrypt-provisioning registered
Sep 9 23:51:50.774762 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:51:50.774769 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:51:50.774776 kernel: ima: No architecture policies found
Sep 9 23:51:50.774783 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:51:50.774791 kernel: clk: Disabling unused clocks
Sep 9 23:51:50.774798 kernel: PM: genpd: Disabling unused power domains
Sep 9 23:51:50.774805 kernel: Warning: unable to open an initial console.
Sep 9 23:51:50.774812 kernel: Freeing unused kernel memory: 38912K
Sep 9 23:51:50.774819 kernel: Run /init as init process
Sep 9 23:51:50.774826 kernel: with arguments:
Sep 9 23:51:50.774832 kernel: /init
Sep 9 23:51:50.774839 kernel: with environment:
Sep 9 23:51:50.774846 kernel: HOME=/
Sep 9 23:51:50.774853 kernel: TERM=linux
Sep 9 23:51:50.774861 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:51:50.774869 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:51:50.774879 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:51:50.774887 systemd[1]: Detected virtualization kvm.
Sep 9 23:51:50.774895 systemd[1]: Detected architecture arm64.
Sep 9 23:51:50.774902 systemd[1]: Running in initrd.
Sep 9 23:51:50.774909 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:51:50.774918 systemd[1]: Hostname set to .
Sep 9 23:51:50.774931 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:51:50.774939 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:51:50.774947 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:51:50.774954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:51:50.774962 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:51:50.774970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:51:50.774977 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:51:50.774987 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:51:50.774996 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:51:50.775004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:51:50.775011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:51:50.775019 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:51:50.775026 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:51:50.775034 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:51:50.775043 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:51:50.775051 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:51:50.775058 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:51:50.775066 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:51:50.775073 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:51:50.775081 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:51:50.775088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:51:50.775096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:51:50.775105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:51:50.775113 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:51:50.775121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:51:50.775128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:51:50.775136 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:51:50.775144 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 23:51:50.775151 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:51:50.775159 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:51:50.775166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:51:50.775175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:50.775183 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:51:50.775191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:51:50.775199 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:51:50.775208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:51:50.775233 systemd-journald[244]: Collecting audit messages is disabled.
Sep 9 23:51:50.775252 systemd-journald[244]: Journal started
Sep 9 23:51:50.775272 systemd-journald[244]: Runtime Journal (/run/log/journal/e8c528a755624232a6968d6f920c7f81) is 6M, max 48.5M, 42.4M free.
Sep 9 23:51:50.785898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:51:50.785942 kernel: Bridge firewalling registered
Sep 9 23:51:50.785953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:50.767319 systemd-modules-load[246]: Inserted module 'overlay'
Sep 9 23:51:50.782578 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 9 23:51:50.789559 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:51:50.790715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:51:50.793481 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:51:50.796514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:51:50.798193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:51:50.799809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:51:50.804992 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:51:50.810005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:51:50.813481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:51:50.816199 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 23:51:50.818963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:51:50.822893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:51:50.825367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:51:50.827733 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:51:50.846014 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db
Sep 9 23:51:50.861011 systemd-resolved[287]: Positive Trust Anchors:
Sep 9 23:51:50.861033 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:51:50.861064 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:51:50.866261 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 9 23:51:50.867304 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:51:50.870070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:51:50.932464 kernel: SCSI subsystem initialized
Sep 9 23:51:50.937448 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:51:50.945478 kernel: iscsi: registered transport (tcp)
Sep 9 23:51:50.958464 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:51:50.958489 kernel: QLogic iSCSI HBA Driver
Sep 9 23:51:50.976578 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 23:51:50.999371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 23:51:51.001146 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 23:51:51.054235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:51:51.056460 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:51:51.122481 kernel: raid6: neonx8 gen() 15708 MB/s
Sep 9 23:51:51.139466 kernel: raid6: neonx4 gen() 15732 MB/s
Sep 9 23:51:51.156473 kernel: raid6: neonx2 gen() 13119 MB/s
Sep 9 23:51:51.173471 kernel: raid6: neonx1 gen() 10451 MB/s
Sep 9 23:51:51.190471 kernel: raid6: int64x8 gen() 6897 MB/s
Sep 9 23:51:51.207462 kernel: raid6: int64x4 gen() 7309 MB/s
Sep 9 23:51:51.224454 kernel: raid6: int64x2 gen() 6090 MB/s
Sep 9 23:51:51.241471 kernel: raid6: int64x1 gen() 5031 MB/s
Sep 9 23:51:51.241512 kernel: raid6: using algorithm neonx4 gen() 15732 MB/s
Sep 9 23:51:51.258481 kernel: raid6: .... xor() 12322 MB/s, rmw enabled
Sep 9 23:51:51.258551 kernel: raid6: using neon recovery algorithm
Sep 9 23:51:51.263543 kernel: xor: measuring software checksum speed
Sep 9 23:51:51.263581 kernel: 8regs : 21584 MB/sec
Sep 9 23:51:51.264613 kernel: 32regs : 21676 MB/sec
Sep 9 23:51:51.264627 kernel: arm64_neon : 28061 MB/sec
Sep 9 23:51:51.264635 kernel: xor: using function: arm64_neon (28061 MB/sec)
Sep 9 23:51:51.318466 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:51:51.327366 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:51:51.331299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:51:51.367786 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 9 23:51:51.374009 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:51:51.376161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:51:51.410783 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
Sep 9 23:51:51.439508 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:51:51.441792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:51:51.493377 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:51:51.498862 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:51:51.548761 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:51:51.553800 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:51:51.556497 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:51:51.556514 kernel: GPT:9289727 != 19775487
Sep 9 23:51:51.556523 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:51:51.556531 kernel: GPT:9289727 != 19775487
Sep 9 23:51:51.556545 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:51:51.557499 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:51.569479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:51:51.569602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:51.576311 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:51.578760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:51:51.593232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:51:51.609773 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:51:51.611051 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:51:51.612638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:51:51.623921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:51:51.634351 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:51:51.635470 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:51:51.638141 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:51:51.639908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:51:51.641476 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:51:51.643887 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:51:51.645506 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:51:51.667132 disk-uuid[591]: Primary Header is updated.
Sep 9 23:51:51.667132 disk-uuid[591]: Secondary Entries is updated.
Sep 9 23:51:51.667132 disk-uuid[591]: Secondary Header is updated.
Sep 9 23:51:51.672457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:51.675457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:51.675608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:51:52.719462 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:51:52.719793 disk-uuid[594]: The operation has completed successfully.
Sep 9 23:51:52.739254 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:51:52.739352 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:51:52.770854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:51:52.788990 sh[610]: Success
Sep 9 23:51:52.804341 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:51:52.804420 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:51:52.804457 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 23:51:52.811460 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 23:51:52.839530 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:51:52.842425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:51:52.854960 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:51:52.860459 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (622)
Sep 9 23:51:52.862711 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f
Sep 9 23:51:52.862732 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:52.866795 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:51:52.866833 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 23:51:52.868034 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:51:52.869499 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:51:52.870755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:51:52.871590 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:51:52.874735 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:51:52.899453 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653)
Sep 9 23:51:52.901447 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:52.901493 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:52.904448 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:52.904503 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:52.910470 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:52.911085 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:51:52.914662 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:51:52.994174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:51:52.997464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 23:51:53.024476 ignition[696]: Ignition 2.21.0
Sep 9 23:51:53.024485 ignition[696]: Stage: fetch-offline
Sep 9 23:51:53.024518 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:53.024526 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:53.024691 ignition[696]: parsed url from cmdline: ""
Sep 9 23:51:53.024694 ignition[696]: no config URL provided
Sep 9 23:51:53.024699 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:51:53.024706 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:51:53.024726 ignition[696]: op(1): [started] loading QEMU firmware config module
Sep 9 23:51:53.024730 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:51:53.030711 ignition[696]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:51:53.035620 systemd-networkd[803]: lo: Link UP
Sep 9 23:51:53.035633 systemd-networkd[803]: lo: Gained carrier
Sep 9 23:51:53.036314 systemd-networkd[803]: Enumeration completed
Sep 9 23:51:53.036464 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:51:53.037070 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:53.037075 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:51:53.037460 systemd[1]: Reached target network.target - Network.
Sep 9 23:51:53.037816 systemd-networkd[803]: eth0: Link UP
Sep 9 23:51:53.037934 systemd-networkd[803]: eth0: Gained carrier
Sep 9 23:51:53.037944 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:51:53.068514 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:51:53.084001 ignition[696]: parsing config with SHA512: f4462134616a4a07d1f0c5f9370f6ad292eec4f0b7ca03e5087f9eb7ad3cb102b436f49eb8303eb1b4fa62a10bd9aaf0d2a6e5ac8b23cb7e25a4143c57377051
Sep 9 23:51:53.090380 unknown[696]: fetched base config from "system"
Sep 9 23:51:53.090392 unknown[696]: fetched user config from "qemu"
Sep 9 23:51:53.090754 ignition[696]: fetch-offline: fetch-offline passed
Sep 9 23:51:53.093166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:51:53.090815 ignition[696]: Ignition finished successfully
Sep 9 23:51:53.094333 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:51:53.095092 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:51:53.126719 ignition[810]: Ignition 2.21.0
Sep 9 23:51:53.126735 ignition[810]: Stage: kargs
Sep 9 23:51:53.126887 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:53.126897 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:53.128812 ignition[810]: kargs: kargs passed
Sep 9 23:51:53.128893 ignition[810]: Ignition finished successfully
Sep 9 23:51:53.133107 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:51:53.135159 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:51:53.168650 ignition[818]: Ignition 2.21.0
Sep 9 23:51:53.168664 ignition[818]: Stage: disks
Sep 9 23:51:53.168814 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:53.168822 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:53.170626 ignition[818]: disks: disks passed
Sep 9 23:51:53.170694 ignition[818]: Ignition finished successfully
Sep 9 23:51:53.172981 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:51:53.174389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:51:53.175701 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:51:53.177383 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:51:53.179120 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:51:53.180563 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:51:53.182812 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:51:53.213531 systemd-resolved[287]: Detected conflict on linux IN A 10.0.0.91
Sep 9 23:51:53.213546 systemd-resolved[287]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Sep 9 23:51:53.216014 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 23:51:53.221183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:51:53.225337 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:51:53.304294 kernel: EXT4-fs (vda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none.
Sep 9 23:51:53.305098 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:51:53.313702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:51:53.316174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:51:53.336058 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:51:53.336960 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:51:53.337007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:51:53.347716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 9 23:51:53.347740 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:53.347750 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:53.337031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:51:53.352625 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:53.352692 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:53.354066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:51:53.360069 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:51:53.362038 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:51:53.406295 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:51:53.411486 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:51:53.416172 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:51:53.421588 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:51:53.502593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:51:53.508730 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:51:53.526150 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:51:53.532501 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:53.549735 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:51:53.553209 ignition[949]: INFO : Ignition 2.21.0
Sep 9 23:51:53.553209 ignition[949]: INFO : Stage: mount
Sep 9 23:51:53.554549 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:53.554549 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:53.559006 ignition[949]: INFO : mount: mount passed
Sep 9 23:51:53.559006 ignition[949]: INFO : Ignition finished successfully
Sep 9 23:51:53.557955 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:51:53.560796 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:51:53.860340 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:51:53.861814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:51:53.886456 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 9 23:51:53.888206 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5
Sep 9 23:51:53.888245 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:51:53.890477 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 23:51:53.890509 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 23:51:53.891933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:51:53.923744 ignition[980]: INFO : Ignition 2.21.0
Sep 9 23:51:53.923744 ignition[980]: INFO : Stage: files
Sep 9 23:51:53.925488 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:53.925488 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:53.928355 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:51:53.928355 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:51:53.928355 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:51:53.933263 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:51:53.933263 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:51:53.933263 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:51:53.933263 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:51:53.933263 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 23:51:53.929019 unknown[980]: wrote ssh authorized keys file for user: core
Sep 9 23:51:54.004068 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 23:51:54.386749 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 23:51:54.389178 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:51:54.391394 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:51:54.403784 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 23:51:54.798776 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 23:51:54.850621 systemd-networkd[803]: eth0: Gained IPv6LL
Sep 9 23:51:55.329772 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 23:51:55.329772 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 23:51:55.333748 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:51:55.338101 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 23:51:55.338101 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 23:51:55.338101 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 23:51:55.344514 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:51:55.344514 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:51:55.344514 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 23:51:55.344514 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:51:55.362980 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:51:55.367206 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:51:55.370566 ignition[980]: INFO : files: files passed
Sep 9 23:51:55.370566 ignition[980]: INFO : Ignition finished successfully
Sep 9 23:51:55.371324 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:51:55.376604 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:51:55.378650 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:51:55.388103 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:51:55.388212 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:51:55.391051 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:51:55.393246 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:55.393246 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:55.395935 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:51:55.397034 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:51:55.398975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:51:55.402579 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:51:55.437711 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:51:55.437823 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:51:55.440255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:51:55.441699 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:51:55.443342 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:51:55.444364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:51:55.469006 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:51:55.472250 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:51:55.503276 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:51:55.504384 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:51:55.506834 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:51:55.508502 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:51:55.508634 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:51:55.511243 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:51:55.512167 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:51:55.513771 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:51:55.515305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:51:55.516879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:51:55.518834 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 23:51:55.520483 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:51:55.522120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:51:55.523747 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:51:55.525300 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:51:55.527649 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:51:55.528989 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:51:55.529124 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:51:55.531862 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:51:55.533311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:51:55.534854 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:51:55.535536 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:51:55.537394 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:51:55.537526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:51:55.539976 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:51:55.540084 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:51:55.541522 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:51:55.542803 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:51:55.543534 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:51:55.545501 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:51:55.546858 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:51:55.548256 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:51:55.548338 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:51:55.550165 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:51:55.550232 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:51:55.551420 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:51:55.551570 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:51:55.554297 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:51:55.554400 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:51:55.556305 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:51:55.558053 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:51:55.558856 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:51:55.558971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:51:55.560510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:51:55.560593 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:51:55.566011 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:51:55.569640 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:51:55.581612 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:51:55.585460 ignition[1036]: INFO : Ignition 2.21.0
Sep 9 23:51:55.585460 ignition[1036]: INFO : Stage: umount
Sep 9 23:51:55.585460 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:51:55.585460 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:51:55.588870 ignition[1036]: INFO : umount: umount passed
Sep 9 23:51:55.588870 ignition[1036]: INFO : Ignition finished successfully
Sep 9 23:51:55.591614 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:51:55.592493 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:51:55.594663 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 23:51:55.595479 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 23:51:55.596701 systemd[1]: Stopped target network.target - Network.
Sep 9 23:51:55.597485 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:51:55.597552 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 23:51:55.598814 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 23:51:55.598853 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 23:51:55.601224 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 23:51:55.601287 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 23:51:55.602126 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 23:51:55.602165 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 23:51:55.603787 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 23:51:55.603836 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 23:51:55.605474 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 23:51:55.606968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 23:51:55.614251 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 23:51:55.614401 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 23:51:55.618113 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 23:51:55.618528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 23:51:55.618582 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:51:55.621692 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 23:51:55.623729 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:51:55.624526 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:51:55.627070 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:51:55.628243 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:51:55.629977 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:51:55.630009 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:51:55.631614 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:51:55.632990 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:51:55.633041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:51:55.634616 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:51:55.634658 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:51:55.637013 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:51:55.637057 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:51:55.638661 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:51:55.641530 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:51:55.653863 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:51:55.654125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:51:55.656565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:51:55.656632 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:51:55.657901 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 9 23:51:55.657947 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:51:55.659493 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:51:55.659544 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:51:55.661844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:51:55.661889 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:51:55.664151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:51:55.664205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:51:55.667641 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:51:55.669178 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:51:55.669235 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:51:55.671822 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:51:55.671868 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:51:55.674419 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 23:51:55.674479 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:51:55.677261 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 23:51:55.677307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:51:55.679656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:51:55.679696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:51:55.683126 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:51:55.684478 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 9 23:51:55.686975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:51:55.687078 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:51:55.688900 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:51:55.691195 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:51:55.709930 systemd[1]: Switching root. Sep 9 23:51:55.751717 systemd-journald[244]: Journal stopped Sep 9 23:51:56.577797 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 9 23:51:56.577847 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:51:56.577866 kernel: SELinux: policy capability open_perms=1 Sep 9 23:51:56.577878 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:51:56.577887 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:51:56.577897 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:51:56.577921 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:51:56.577933 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:51:56.577948 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:51:56.577958 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:51:56.577968 kernel: audit: type=1403 audit(1757461915.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:51:56.577979 systemd[1]: Successfully loaded SELinux policy in 81.717ms. Sep 9 23:51:56.577998 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.888ms. 
Sep 9 23:51:56.578009 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:51:56.578020 systemd[1]: Detected virtualization kvm. Sep 9 23:51:56.578031 systemd[1]: Detected architecture arm64. Sep 9 23:51:56.578041 systemd[1]: Detected first boot. Sep 9 23:51:56.578051 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:51:56.578061 zram_generator::config[1080]: No configuration found. Sep 9 23:51:56.578072 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:51:56.578082 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:51:56.578092 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:51:56.578106 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:51:56.578117 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:51:56.578127 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:51:56.578138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:51:56.578147 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:51:56.578158 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:51:56.578168 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:51:56.578178 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:51:56.578190 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:51:56.578199 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Sep 9 23:51:56.578211 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:51:56.578221 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:51:56.578232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:51:56.578242 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:51:56.578252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:51:56.578263 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:51:56.578273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:51:56.578283 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 23:51:56.578295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:51:56.578305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:51:56.578317 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:51:56.578327 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:51:56.578338 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:51:56.578348 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:51:56.578358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:51:56.578368 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:51:56.578380 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:51:56.578390 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:51:56.578401 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 9 23:51:56.578414 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:51:56.578425 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:51:56.578445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:51:56.578457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:51:56.578468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:51:56.578478 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:51:56.578489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:51:56.578501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:51:56.578513 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:51:56.578523 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:51:56.578534 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:51:56.578544 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:51:56.578555 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:51:56.578566 systemd[1]: Reached target machines.target - Containers. Sep 9 23:51:56.578576 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:51:56.578589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:56.578600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:51:56.578611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 9 23:51:56.578621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:56.578631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:51:56.578642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:56.578653 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:51:56.578662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:56.578673 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:51:56.578684 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:51:56.578694 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:51:56.578704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:51:56.578714 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 23:51:56.578725 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:56.578736 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:51:56.578746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:51:56.578757 kernel: loop: module loaded Sep 9 23:51:56.578768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:51:56.578778 kernel: ACPI: bus type drm_connector registered Sep 9 23:51:56.578787 kernel: fuse: init (API version 7.41) Sep 9 23:51:56.578797 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:51:56.578807 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 9 23:51:56.578817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:51:56.578829 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:51:56.578839 systemd[1]: Stopped verity-setup.service. Sep 9 23:51:56.578850 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:51:56.578860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:51:56.578871 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:51:56.578881 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:51:56.578891 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:51:56.578933 systemd-journald[1149]: Collecting audit messages is disabled. Sep 9 23:51:56.578958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:51:56.578970 systemd-journald[1149]: Journal started Sep 9 23:51:56.578993 systemd-journald[1149]: Runtime Journal (/run/log/journal/e8c528a755624232a6968d6f920c7f81) is 6M, max 48.5M, 42.4M free. Sep 9 23:51:56.354351 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:51:56.363700 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:51:56.364124 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:51:56.581331 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:51:56.583486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:51:56.585106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:51:56.587782 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:51:56.587985 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:51:56.589498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 9 23:51:56.589657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:56.591844 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:51:56.592040 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:51:56.593308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:56.593535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:56.594781 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:51:56.594969 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:51:56.596416 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:56.596622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:56.597985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:51:56.599308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:51:56.601050 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:51:56.602685 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:51:56.616277 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:51:56.618712 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:51:56.620882 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:51:56.621933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:51:56.621977 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:51:56.623720 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Sep 9 23:51:56.632371 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:51:56.633526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:56.635162 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:51:56.637488 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:51:56.638731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:51:56.641830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:51:56.642986 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:51:56.644312 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:51:56.647523 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:51:56.658747 systemd-journald[1149]: Time spent on flushing to /var/log/journal/e8c528a755624232a6968d6f920c7f81 is 17.002ms for 889 entries. Sep 9 23:51:56.658747 systemd-journald[1149]: System Journal (/var/log/journal/e8c528a755624232a6968d6f920c7f81) is 8M, max 195.6M, 187.6M free. Sep 9 23:51:56.684621 systemd-journald[1149]: Received client request to flush runtime journal. Sep 9 23:51:56.684666 kernel: loop0: detected capacity change from 0 to 100608 Sep 9 23:51:56.650106 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:51:56.655304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:51:56.656818 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Sep 9 23:51:56.658866 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:51:56.661707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:51:56.669325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:51:56.672194 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:51:56.680058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:51:56.687161 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:51:56.707930 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:51:56.707026 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 9 23:51:56.707037 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Sep 9 23:51:56.707608 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:51:56.711533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:51:56.715320 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:51:56.740711 kernel: loop1: detected capacity change from 0 to 119320 Sep 9 23:51:56.754497 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:51:56.757267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:51:56.766455 kernel: loop2: detected capacity change from 0 to 203944 Sep 9 23:51:56.777036 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 23:51:56.777367 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 23:51:56.782478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 9 23:51:56.799530 kernel: loop3: detected capacity change from 0 to 100608 Sep 9 23:51:56.805485 kernel: loop4: detected capacity change from 0 to 119320 Sep 9 23:51:56.811463 kernel: loop5: detected capacity change from 0 to 203944 Sep 9 23:51:56.817111 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:51:56.817588 (sd-merge)[1222]: Merged extensions into '/usr'. Sep 9 23:51:56.821796 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:51:56.821823 systemd[1]: Reloading... Sep 9 23:51:56.879475 zram_generator::config[1247]: No configuration found. Sep 9 23:51:56.955103 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:51:57.016985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:51:57.017507 systemd[1]: Reloading finished in 193 ms. Sep 9 23:51:57.035066 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:51:57.038472 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:51:57.054723 systemd[1]: Starting ensure-sysext.service... Sep 9 23:51:57.056748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:51:57.066800 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:51:57.066817 systemd[1]: Reloading... Sep 9 23:51:57.071914 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:51:57.071948 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:51:57.072202 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 9 23:51:57.072387 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:51:57.073027 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:51:57.073245 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 9 23:51:57.073292 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 9 23:51:57.076145 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:51:57.076159 systemd-tmpfiles[1283]: Skipping /boot Sep 9 23:51:57.082271 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:51:57.082289 systemd-tmpfiles[1283]: Skipping /boot Sep 9 23:51:57.122474 zram_generator::config[1310]: No configuration found. Sep 9 23:51:57.256068 systemd[1]: Reloading finished in 188 ms. Sep 9 23:51:57.278278 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:51:57.285306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:51:57.299702 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:51:57.302728 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:51:57.305123 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:51:57.309652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:51:57.315681 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:51:57.319513 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:51:57.327130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 9 23:51:57.329102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:57.333054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:57.337088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:57.338144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:57.338295 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:57.341149 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:51:57.356893 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:51:57.360101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:51:57.360269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:57.362146 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:57.362310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:57.362420 systemd-udevd[1357]: Using default interface naming scheme 'v255'. Sep 9 23:51:57.370940 augenrules[1376]: No rules Sep 9 23:51:57.371722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:57.374039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:57.376220 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:51:57.377514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:51:57.379590 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:51:57.389846 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 9 23:51:57.393467 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:51:57.395957 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:51:57.406494 systemd[1]: Finished ensure-sysext.service. Sep 9 23:51:57.412873 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:51:57.414948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:51:57.427096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:51:57.431712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:51:57.433998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:51:57.438367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:51:57.440537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:51:57.440699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:51:57.444130 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:51:57.448010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:51:57.455675 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:51:57.456803 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:51:57.457424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 9 23:51:57.457634 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:51:57.461396 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:51:57.464087 augenrules[1418]: /sbin/augenrules: No change Sep 9 23:51:57.466783 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:51:57.471283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:51:57.472319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:51:57.473934 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:51:57.474131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:51:57.480014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:51:57.480075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:51:57.482078 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:51:57.483255 augenrules[1450]: No rules Sep 9 23:51:57.484701 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:51:57.484965 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:51:57.493341 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:51:57.536157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 23:51:57.538836 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:51:57.568923 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:51:57.629804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 23:51:57.651224 systemd-networkd[1431]: lo: Link UP Sep 9 23:51:57.651242 systemd-networkd[1431]: lo: Gained carrier Sep 9 23:51:57.652182 systemd-networkd[1431]: Enumeration completed Sep 9 23:51:57.652639 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:51:57.652649 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:51:57.653294 systemd-networkd[1431]: eth0: Link UP Sep 9 23:51:57.653404 systemd-networkd[1431]: eth0: Gained carrier Sep 9 23:51:57.653424 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:51:57.653738 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:51:57.658675 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:51:57.661704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:51:57.667526 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 23:51:57.679761 systemd-resolved[1351]: Positive Trust Anchors: Sep 9 23:51:57.680086 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:51:57.680162 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:51:57.682481 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:51:57.687622 systemd-resolved[1351]: Defaulting to hostname 'linux'. Sep 9 23:51:57.689131 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:51:57.690758 systemd[1]: Reached target network.target - Network. Sep 9 23:51:57.691536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:51:57.696100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:51:57.698070 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 23:51:57.698871 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 23:51:57.698941 systemd-timesyncd[1433]: Initial clock synchronization to Tue 2025-09-09 23:51:57.652320 UTC. Sep 9 23:51:57.699782 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:51:57.700991 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:51:57.702373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Sep 9 23:51:57.703934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 23:51:57.705304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 23:51:57.705347 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:51:57.706414 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 23:51:57.707782 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 23:51:57.709128 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 23:51:57.710469 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:51:57.712364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 23:51:57.714942 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 23:51:57.717708 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 23:51:57.719046 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 23:51:57.720114 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 23:51:57.723240 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 23:51:57.724582 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 23:51:57.726232 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 23:51:57.727354 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:51:57.728287 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:51:57.729130 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:51:57.729159 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 23:51:57.730292 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 23:51:57.732426 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 23:51:57.734346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 23:51:57.736551 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 23:51:57.738556 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 23:51:57.739495 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 23:51:57.742527 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 23:51:57.744452 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 23:51:57.746148 jq[1497]: false
Sep 9 23:51:57.746541 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 23:51:57.748763 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 23:51:57.751976 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 23:51:57.755510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 23:51:57.756059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 23:51:57.756755 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 23:51:57.762149 extend-filesystems[1498]: Found /dev/vda6
Sep 9 23:51:57.762576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 23:51:57.766936 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 23:51:57.768014 extend-filesystems[1498]: Found /dev/vda9
Sep 9 23:51:57.769956 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 23:51:57.770173 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 23:51:57.771171 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 23:51:57.771348 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 23:51:57.772462 extend-filesystems[1498]: Checking size of /dev/vda9
Sep 9 23:51:57.777768 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 23:51:57.782067 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 23:51:57.789382 jq[1510]: true
Sep 9 23:51:57.795813 tar[1519]: linux-arm64/helm
Sep 9 23:51:57.800518 extend-filesystems[1498]: Resized partition /dev/vda9
Sep 9 23:51:57.798381 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 23:51:57.808421 update_engine[1508]: I20250909 23:51:57.806043 1508 main.cc:92] Flatcar Update Engine starting
Sep 9 23:51:57.808745 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 23:51:57.813698 jq[1535]: true
Sep 9 23:51:57.820640 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 23:51:57.841573 dbus-daemon[1495]: [system] SELinux support is enabled
Sep 9 23:51:57.841798 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 23:51:57.846894 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 23:51:57.846942 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 23:51:57.848837 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 23:51:57.848863 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 23:51:57.855868 update_engine[1508]: I20250909 23:51:57.853636 1508 update_check_scheduler.cc:74] Next update check in 2m55s
Sep 9 23:51:57.856084 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 23:51:57.861563 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 23:51:57.864667 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 9 23:51:57.865399 systemd-logind[1506]: New seat seat0.
Sep 9 23:51:57.868682 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 23:51:57.903454 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 23:51:57.924917 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 23:51:57.925504 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 23:51:57.925504 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 23:51:57.925504 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 23:51:57.930815 extend-filesystems[1498]: Resized filesystem in /dev/vda9
Sep 9 23:51:57.926959 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 23:51:57.927420 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 23:51:57.933959 bash[1556]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 23:51:57.935894 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 23:51:57.940787 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 23:51:58.006068 containerd[1536]: time="2025-09-09T23:51:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 23:51:58.006744 containerd[1536]: time="2025-09-09T23:51:58.006682833Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 23:51:58.018034 containerd[1536]: time="2025-09-09T23:51:58.017975700Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.885µs"
Sep 9 23:51:58.018034 containerd[1536]: time="2025-09-09T23:51:58.018018575Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 23:51:58.018034 containerd[1536]: time="2025-09-09T23:51:58.018042546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 23:51:58.018256 containerd[1536]: time="2025-09-09T23:51:58.018224496Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 23:51:58.018256 containerd[1536]: time="2025-09-09T23:51:58.018247588Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 23:51:58.018326 containerd[1536]: time="2025-09-09T23:51:58.018277501Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018345 containerd[1536]: time="2025-09-09T23:51:58.018331823Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018367 containerd[1536]: time="2025-09-09T23:51:58.018345383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018629 containerd[1536]: time="2025-09-09T23:51:58.018591108Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018629 containerd[1536]: time="2025-09-09T23:51:58.018614520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018629 containerd[1536]: time="2025-09-09T23:51:58.018626325Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018700 containerd[1536]: time="2025-09-09T23:51:58.018635139Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018734 containerd[1536]: time="2025-09-09T23:51:58.018715904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018952 containerd[1536]: time="2025-09-09T23:51:58.018923460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:51:58.018996 containerd[1536]: time="2025-09-09T23:51:58.018981491Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 23:51:58.019020 containerd[1536]: time="2025-09-09T23:51:58.018995410Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 23:51:58.019045 containerd[1536]: time="2025-09-09T23:51:58.019034656Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 23:51:58.019362 containerd[1536]: time="2025-09-09T23:51:58.019315478Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 23:51:58.019487 containerd[1536]: time="2025-09-09T23:51:58.019466877Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 23:51:58.029867 containerd[1536]: time="2025-09-09T23:51:58.029773496Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 23:51:58.030009 containerd[1536]: time="2025-09-09T23:51:58.029991541Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 23:51:58.030113 containerd[1536]: time="2025-09-09T23:51:58.030099387Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 23:51:58.030204 containerd[1536]: time="2025-09-09T23:51:58.030187610Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 23:51:58.030257 containerd[1536]: time="2025-09-09T23:51:58.030245322Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 23:51:58.030306 containerd[1536]: time="2025-09-09T23:51:58.030294898Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 23:51:58.030361 containerd[1536]: time="2025-09-09T23:51:58.030348621Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 23:51:58.030422 containerd[1536]: time="2025-09-09T23:51:58.030407330Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 23:51:58.030496 containerd[1536]: time="2025-09-09T23:51:58.030481714Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 23:51:58.030565 containerd[1536]: time="2025-09-09T23:51:58.030552348Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 23:51:58.030614 containerd[1536]: time="2025-09-09T23:51:58.030602681Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 23:51:58.030668 containerd[1536]: time="2025-09-09T23:51:58.030655846Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 23:51:58.030904 containerd[1536]: time="2025-09-09T23:51:58.030881110Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 23:51:58.030979 containerd[1536]: time="2025-09-09T23:51:58.030965185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 23:51:58.031043 containerd[1536]: time="2025-09-09T23:51:58.031022818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 23:51:58.031096 containerd[1536]: time="2025-09-09T23:51:58.031083561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 23:51:58.031164 containerd[1536]: time="2025-09-09T23:51:58.031149927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 23:51:58.031215 containerd[1536]: time="2025-09-09T23:51:58.031203491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 23:51:58.031273 containerd[1536]: time="2025-09-09T23:51:58.031260007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 23:51:58.031323 containerd[1536]: time="2025-09-09T23:51:58.031311616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 23:51:58.031393 containerd[1536]: time="2025-09-09T23:51:58.031377704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 23:51:58.031477 containerd[1536]: time="2025-09-09T23:51:58.031463255Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 23:51:58.031537 containerd[1536]: time="2025-09-09T23:51:58.031523918Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 23:51:58.033114 containerd[1536]: time="2025-09-09T23:51:58.033084492Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 23:51:58.033326 containerd[1536]: time="2025-09-09T23:51:58.033289216Z" level=info msg="Start snapshots syncer"
Sep 9 23:51:58.033419 containerd[1536]: time="2025-09-09T23:51:58.033402367Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 23:51:58.034492 containerd[1536]: time="2025-09-09T23:51:58.033724269Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 23:51:58.034492 containerd[1536]: time="2025-09-09T23:51:58.033783616Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.033896806Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034056860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034080631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034091958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034104043Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034115130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034126537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034137864Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034164387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034184568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034196653Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034230833Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034249340Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 23:51:58.034624 containerd[1536]: time="2025-09-09T23:51:58.034258752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034269800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034277498Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034287628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034298596Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034390608Z" level=info msg="runtime interface created"
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034395713Z" level=info msg="created NRI interface"
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034404049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034415575Z" level=info msg="Connect containerd service"
Sep 9 23:51:58.034883 containerd[1536]: time="2025-09-09T23:51:58.034494785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 23:51:58.035354 containerd[1536]: time="2025-09-09T23:51:58.035321098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116275939Z" level=info msg="Start subscribing containerd event"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116348009Z" level=info msg="Start recovering state"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116457849Z" level=info msg="Start event monitor"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116473124Z" level=info msg="Start cni network conf syncer for default"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116481460Z" level=info msg="Start streaming server"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116492827Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116500485Z" level=info msg="runtime interface starting up..."
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116507863Z" level=info msg="starting plugins..."
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116522580Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116308245Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116678167Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 23:51:58.118503 containerd[1536]: time="2025-09-09T23:51:58.116818997Z" level=info msg="containerd successfully booted in 0.111108s"
Sep 9 23:51:58.116856 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 23:51:58.125489 tar[1519]: linux-arm64/LICENSE
Sep 9 23:51:58.125595 tar[1519]: linux-arm64/README.md
Sep 9 23:51:58.143246 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 23:51:58.830312 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 23:51:58.851260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 23:51:58.857680 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 23:51:58.883536 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 23:51:58.883744 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 23:51:58.889344 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 23:51:58.918551 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 23:51:58.921758 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 23:51:58.924186 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 23:51:58.925424 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 23:51:59.587229 systemd-networkd[1431]: eth0: Gained IPv6LL
Sep 9 23:51:59.593783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 23:51:59.597139 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 23:51:59.600070 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 23:51:59.603969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:51:59.606744 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 23:51:59.640960 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 23:51:59.653442 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 23:51:59.653673 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 23:51:59.655290 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 23:52:00.288194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:52:00.289859 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 23:52:00.295552 systemd[1]: Startup finished in 2.086s (kernel) + 5.308s (initrd) + 4.418s (userspace) = 11.813s.
Sep 9 23:52:00.306840 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:52:00.747372 kubelet[1628]: E0909 23:52:00.747287 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:52:00.750517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:52:00.750651 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:52:00.751246 systemd[1]: kubelet.service: Consumed 795ms CPU time, 257.3M memory peak.
Sep 9 23:52:03.796599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 23:52:03.797816 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:46360.service - OpenSSH per-connection server daemon (10.0.0.1:46360).
Sep 9 23:52:03.909654 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:03.911773 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:03.930618 systemd-logind[1506]: New session 1 of user core.
Sep 9 23:52:03.931359 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 23:52:03.932493 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 23:52:03.967206 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 23:52:03.970809 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 23:52:04.008863 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 23:52:04.012236 systemd-logind[1506]: New session c1 of user core.
Sep 9 23:52:04.162587 systemd[1646]: Queued start job for default target default.target.
Sep 9 23:52:04.183710 systemd[1646]: Created slice app.slice - User Application Slice.
Sep 9 23:52:04.183746 systemd[1646]: Reached target paths.target - Paths.
Sep 9 23:52:04.183835 systemd[1646]: Reached target timers.target - Timers.
Sep 9 23:52:04.186592 systemd[1646]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 23:52:04.196652 systemd[1646]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 23:52:04.196791 systemd[1646]: Reached target sockets.target - Sockets.
Sep 9 23:52:04.196840 systemd[1646]: Reached target basic.target - Basic System.
Sep 9 23:52:04.196871 systemd[1646]: Reached target default.target - Main User Target.
Sep 9 23:52:04.196897 systemd[1646]: Startup finished in 174ms.
Sep 9 23:52:04.197010 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 23:52:04.198700 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 23:52:04.270239 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:46374.service - OpenSSH per-connection server daemon (10.0.0.1:46374).
Sep 9 23:52:04.328545 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:04.330135 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:04.334550 systemd-logind[1506]: New session 2 of user core.
Sep 9 23:52:04.343706 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 23:52:04.405862 sshd[1660]: Connection closed by 10.0.0.1 port 46374
Sep 9 23:52:04.406216 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:04.422100 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:46374.service: Deactivated successfully.
Sep 9 23:52:04.423699 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 23:52:04.424586 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
Sep 9 23:52:04.426490 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:46376.service - OpenSSH per-connection server daemon (10.0.0.1:46376).
Sep 9 23:52:04.427327 systemd-logind[1506]: Removed session 2.
Sep 9 23:52:04.502739 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 46376 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:04.504412 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:04.509660 systemd-logind[1506]: New session 3 of user core.
Sep 9 23:52:04.521668 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 23:52:04.576497 sshd[1669]: Connection closed by 10.0.0.1 port 46376
Sep 9 23:52:04.576310 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:04.590143 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:46376.service: Deactivated successfully.
Sep 9 23:52:04.594348 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 23:52:04.596185 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
Sep 9 23:52:04.599027 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:46384.service - OpenSSH per-connection server daemon (10.0.0.1:46384).
Sep 9 23:52:04.600314 systemd-logind[1506]: Removed session 3.
Sep 9 23:52:04.680378 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 46384 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:04.682485 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:04.686885 systemd-logind[1506]: New session 4 of user core.
Sep 9 23:52:04.702694 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 23:52:04.758884 sshd[1678]: Connection closed by 10.0.0.1 port 46384
Sep 9 23:52:04.759611 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:04.776335 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:46384.service: Deactivated successfully.
Sep 9 23:52:04.779126 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 23:52:04.780705 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
Sep 9 23:52:04.784311 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:46386.service - OpenSSH per-connection server daemon (10.0.0.1:46386).
Sep 9 23:52:04.787073 systemd-logind[1506]: Removed session 4.
Sep 9 23:52:04.851493 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 46386 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:04.852394 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:04.860499 systemd-logind[1506]: New session 5 of user core.
Sep 9 23:52:04.871756 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 23:52:04.936299 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 23:52:04.936729 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:52:04.953554 sudo[1688]: pam_unix(sudo:session): session closed for user root
Sep 9 23:52:04.956903 sshd[1687]: Connection closed by 10.0.0.1 port 46386
Sep 9 23:52:04.956578 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:04.971789 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:46386.service: Deactivated successfully.
Sep 9 23:52:04.975026 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 23:52:04.977696 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit.
Sep 9 23:52:04.984371 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:46398.service - OpenSSH per-connection server daemon (10.0.0.1:46398).
Sep 9 23:52:04.985452 systemd-logind[1506]: Removed session 5.
Sep 9 23:52:05.040633 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:05.042216 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:05.050412 systemd-logind[1506]: New session 6 of user core.
Sep 9 23:52:05.059685 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 23:52:05.112526 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 23:52:05.113185 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:52:05.249780 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 9 23:52:05.255062 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 23:52:05.255321 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:52:05.268884 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 23:52:05.313029 augenrules[1721]: No rules
Sep 9 23:52:05.314109 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 23:52:05.314352 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 23:52:05.316057 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 9 23:52:05.317484 sshd[1697]: Connection closed by 10.0.0.1 port 46398
Sep 9 23:52:05.318602 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Sep 9 23:52:05.329717 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:46398.service: Deactivated successfully.
Sep 9 23:52:05.331595 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 23:52:05.332400 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit.
Sep 9 23:52:05.336376 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:46414.service - OpenSSH per-connection server daemon (10.0.0.1:46414).
Sep 9 23:52:05.337673 systemd-logind[1506]: Removed session 6.
Sep 9 23:52:05.393284 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 46414 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:52:05.394014 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:52:05.404779 systemd-logind[1506]: New session 7 of user core.
Sep 9 23:52:05.413669 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 23:52:05.470787 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 23:52:05.471076 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 23:52:05.844298 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 23:52:05.860854 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 23:52:06.091524 dockerd[1754]: time="2025-09-09T23:52:06.090583716Z" level=info msg="Starting up"
Sep 9 23:52:06.092401 dockerd[1754]: time="2025-09-09T23:52:06.092356518Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 23:52:06.108697 dockerd[1754]: time="2025-09-09T23:52:06.108587820Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 23:52:06.149995 dockerd[1754]: time="2025-09-09T23:52:06.149923419Z" level=info msg="Loading containers: start."
Sep 9 23:52:06.163469 kernel: Initializing XFRM netlink socket
Sep 9 23:52:06.391689 systemd-networkd[1431]: docker0: Link UP
Sep 9 23:52:06.397361 dockerd[1754]: time="2025-09-09T23:52:06.396832855Z" level=info msg="Loading containers: done."
Sep 9 23:52:06.413246 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck374864731-merged.mount: Deactivated successfully.
Sep 9 23:52:06.416899 dockerd[1754]: time="2025-09-09T23:52:06.416825946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 23:52:06.417019 dockerd[1754]: time="2025-09-09T23:52:06.416928408Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 23:52:06.417043 dockerd[1754]: time="2025-09-09T23:52:06.417023762Z" level=info msg="Initializing buildkit"
Sep 9 23:52:06.442254 dockerd[1754]: time="2025-09-09T23:52:06.442204527Z" level=info msg="Completed buildkit initialization"
Sep 9 23:52:06.450311 dockerd[1754]: time="2025-09-09T23:52:06.450255351Z" level=info msg="Daemon has completed initialization"
Sep 9 23:52:06.450966 dockerd[1754]: time="2025-09-09T23:52:06.450814460Z" level=info msg="API listen on /run/docker.sock"
Sep 9 23:52:06.450500 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 23:52:07.035589 containerd[1536]: time="2025-09-09T23:52:07.035546758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 23:52:07.584382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599142795.mount: Deactivated successfully.
Sep 9 23:52:08.764840 containerd[1536]: time="2025-09-09T23:52:08.764786060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:08.766613 containerd[1536]: time="2025-09-09T23:52:08.766577127Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 9 23:52:08.767756 containerd[1536]: time="2025-09-09T23:52:08.767706804Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:08.771707 containerd[1536]: time="2025-09-09T23:52:08.771645476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:08.773305 containerd[1536]: time="2025-09-09T23:52:08.773262369Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.737673518s"
Sep 9 23:52:08.773351 containerd[1536]: time="2025-09-09T23:52:08.773307420Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 9 23:52:08.774840 containerd[1536]: time="2025-09-09T23:52:08.774532911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 23:52:09.984512 containerd[1536]: time="2025-09-09T23:52:09.984456320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.987629 containerd[1536]: time="2025-09-09T23:52:09.987574301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 9 23:52:09.988858 containerd[1536]: time="2025-09-09T23:52:09.988803623Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.992788 containerd[1536]: time="2025-09-09T23:52:09.992685272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:09.994221 containerd[1536]: time="2025-09-09T23:52:09.994191398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.219625177s"
Sep 9 23:52:09.994309 containerd[1536]: time="2025-09-09T23:52:09.994294371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 9 23:52:09.994951 containerd[1536]: time="2025-09-09T23:52:09.994764339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 23:52:10.961944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 23:52:10.963359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:52:11.127935 containerd[1536]: time="2025-09-09T23:52:11.127707201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:11.128652 containerd[1536]: time="2025-09-09T23:52:11.128612424Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 9 23:52:11.129872 containerd[1536]: time="2025-09-09T23:52:11.129841799Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:11.132299 containerd[1536]: time="2025-09-09T23:52:11.132230157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:11.133738 containerd[1536]: time="2025-09-09T23:52:11.133704185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.138904294s"
Sep 9 23:52:11.133738 containerd[1536]: time="2025-09-09T23:52:11.133741898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 9 23:52:11.135542 containerd[1536]: time="2025-09-09T23:52:11.135510595Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 23:52:11.146339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:52:11.151514 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 23:52:11.199165 kubelet[2039]: E0909 23:52:11.199099 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 23:52:11.202181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 23:52:11.202359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 23:52:11.202939 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.2M memory peak.
Sep 9 23:52:12.104366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117442213.mount: Deactivated successfully.
Sep 9 23:52:12.458566 containerd[1536]: time="2025-09-09T23:52:12.458426828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:12.459131 containerd[1536]: time="2025-09-09T23:52:12.459108705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 9 23:52:12.460242 containerd[1536]: time="2025-09-09T23:52:12.460217359Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:12.462531 containerd[1536]: time="2025-09-09T23:52:12.462468067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:12.463305 containerd[1536]: time="2025-09-09T23:52:12.463072555Z" level=info
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.327520651s"
Sep 9 23:52:12.463305 containerd[1536]: time="2025-09-09T23:52:12.463110630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 9 23:52:12.463648 containerd[1536]: time="2025-09-09T23:52:12.463621149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 23:52:13.015860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246270400.mount: Deactivated successfully.
Sep 9 23:52:13.671011 containerd[1536]: time="2025-09-09T23:52:13.670965638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:13.671476 containerd[1536]: time="2025-09-09T23:52:13.671401677Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 23:52:13.674264 containerd[1536]: time="2025-09-09T23:52:13.672504060Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:13.676040 containerd[1536]: time="2025-09-09T23:52:13.676006752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:13.677947 containerd[1536]: time="2025-09-09T23:52:13.677910449Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.214253581s"
Sep 9 23:52:13.678146 containerd[1536]: time="2025-09-09T23:52:13.678123414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 23:52:13.678719 containerd[1536]: time="2025-09-09T23:52:13.678692306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 23:52:14.138317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379361845.mount: Deactivated successfully.
Sep 9 23:52:14.144266 containerd[1536]: time="2025-09-09T23:52:14.143675004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:52:14.144266 containerd[1536]: time="2025-09-09T23:52:14.144089935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 23:52:14.145176 containerd[1536]: time="2025-09-09T23:52:14.145064086Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:52:14.147443 containerd[1536]: time="2025-09-09T23:52:14.147397990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 23:52:14.148725 containerd[1536]: time="2025-09-09T23:52:14.148677585Z" level=info msg="Pulled image
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 469.845394ms"
Sep 9 23:52:14.148725 containerd[1536]: time="2025-09-09T23:52:14.148716785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 23:52:14.149188 containerd[1536]: time="2025-09-09T23:52:14.149116211Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 23:52:14.657194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408704548.mount: Deactivated successfully.
Sep 9 23:52:16.305406 containerd[1536]: time="2025-09-09T23:52:16.305342491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:16.372089 containerd[1536]: time="2025-09-09T23:52:16.372004686Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 9 23:52:16.373619 containerd[1536]: time="2025-09-09T23:52:16.373587805Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:16.377168 containerd[1536]: time="2025-09-09T23:52:16.377120432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:52:16.377882 containerd[1536]: time="2025-09-09T23:52:16.377809405Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.228660746s"
Sep 9 23:52:16.377882 containerd[1536]: time="2025-09-09T23:52:16.377839937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 9 23:52:21.106777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:52:21.107199 systemd[1]: kubelet.service: Consumed 157ms CPU time, 108.2M memory peak.
Sep 9 23:52:21.112284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:52:21.138935 systemd[1]: Reload requested from client PID 2197 ('systemctl') (unit session-7.scope)...
Sep 9 23:52:21.138955 systemd[1]: Reloading...
Sep 9 23:52:21.229490 zram_generator::config[2245]: No configuration found.
Sep 9 23:52:21.425923 systemd[1]: Reloading finished in 286 ms.
Sep 9 23:52:21.488074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:52:21.490522 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 23:52:21.490733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:52:21.490785 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95.1M memory peak.
Sep 9 23:52:21.492264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 23:52:21.644227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 23:52:21.659824 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 23:52:21.700398 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:52:21.700398 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:52:21.700398 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 23:52:21.700746 kubelet[2287]: I0909 23:52:21.700378 2287 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 23:52:22.456507 kubelet[2287]: I0909 23:52:22.455652 2287 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 23:52:22.456507 kubelet[2287]: I0909 23:52:22.455686 2287 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 23:52:22.456507 kubelet[2287]: I0909 23:52:22.455993 2287 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 23:52:22.479673 kubelet[2287]: E0909 23:52:22.479638 2287 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:52:22.481072 kubelet[2287]: I0909 23:52:22.480714 2287 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 23:52:22.489091 kubelet[2287]: I0909 23:52:22.489060 2287 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 23:52:22.494452 kubelet[2287]: I0909 23:52:22.493625 2287 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
Sep 9 23:52:22.494452 kubelet[2287]: I0909 23:52:22.493999 2287 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 23:52:22.494452 kubelet[2287]: I0909 23:52:22.494122 2287 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 23:52:22.494585 kubelet[2287]: I0909 23:52:22.494146 2287 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 23:52:22.494663 kubelet[2287]: I0909 23:52:22.494649 2287 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 23:52:22.494663 kubelet[2287]: I0909 23:52:22.494660 2287 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 23:52:22.495116 kubelet[2287]: I0909 23:52:22.495073 2287 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 23:52:22.497284 kubelet[2287]: I0909 23:52:22.497245 2287 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 23:52:22.497284 kubelet[2287]: I0909 23:52:22.497274 2287 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 23:52:22.497374 kubelet[2287]: I0909 23:52:22.497298 2287 kubelet.go:314] "Adding apiserver pod source"
Sep 9 23:52:22.497394 kubelet[2287]: I0909 23:52:22.497375 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 23:52:22.499707 kubelet[2287]: W0909 23:52:22.499270 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Sep 9 23:52:22.499707 kubelet[2287]: E0909 23:52:22.499345 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:52:22.499933 kubelet[2287]: W0909 23:52:22.499828 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Sep 9 23:52:22.499933 kubelet[2287]: E0909
23:52:22.499883 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:52:22.501177 kubelet[2287]: I0909 23:52:22.501150 2287 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 23:52:22.502112 kubelet[2287]: I0909 23:52:22.502083 2287 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 23:52:22.502893 kubelet[2287]: W0909 23:52:22.502283 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 23:52:22.503343 kubelet[2287]: I0909 23:52:22.503316 2287 server.go:1274] "Started kubelet"
Sep 9 23:52:22.505053 kubelet[2287]: I0909 23:52:22.504941 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 23:52:22.505350 kubelet[2287]: I0909 23:52:22.505255 2287 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 23:52:22.505401 kubelet[2287]: I0909 23:52:22.505380 2287 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 23:52:22.505952 kubelet[2287]: I0909 23:52:22.505919 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 23:52:22.506628 kubelet[2287]: I0909 23:52:22.506601 2287 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 23:52:22.508083 kubelet[2287]: I0909 23:52:22.508061 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 23:52:22.508201 kubelet[2287]: I0909 23:52:22.508169 2287
desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 23:52:22.508689 kubelet[2287]: W0909 23:52:22.508645 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Sep 9 23:52:22.508741 kubelet[2287]: E0909 23:52:22.508692 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError"
Sep 9 23:52:22.510502 kubelet[2287]: I0909 23:52:22.508161 2287 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 23:52:22.510502 kubelet[2287]: I0909 23:52:22.509309 2287 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 23:52:22.510502 kubelet[2287]: E0909 23:52:22.509989 2287 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:52:22.510502 kubelet[2287]: E0909 23:52:22.510254 2287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:52:22.510641 kubelet[2287]: E0909 23:52:22.509457 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c253452d303f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 23:52:22.503288895 +0000 UTC m=+0.840093818,LastTimestamp:2025-09-09 23:52:22.503288895 +0000 UTC m=+0.840093818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 23:52:22.510861 kubelet[2287]: E0909 23:52:22.510822 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Sep 9 23:52:22.511041 kubelet[2287]: I0909 23:52:22.510945 2287 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:52:22.511218 kubelet[2287]: I0909 23:52:22.511197 2287 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:52:22.513035 kubelet[2287]: I0909 23:52:22.512961 2287 factory.go:221] Registration of the containerd container factory successfully Sep 
9 23:52:22.523963 kubelet[2287]: I0909 23:52:22.523907 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:52:22.525043 kubelet[2287]: I0909 23:52:22.524958 2287 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 23:52:22.525043 kubelet[2287]: I0909 23:52:22.524980 2287 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:52:22.525043 kubelet[2287]: I0909 23:52:22.524989 2287 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:52:22.525043 kubelet[2287]: I0909 23:52:22.525010 2287 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:52:22.525043 kubelet[2287]: I0909 23:52:22.525029 2287 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:52:22.525798 kubelet[2287]: W0909 23:52:22.525731 2287 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Sep 9 23:52:22.526045 kubelet[2287]: E0909 23:52:22.525796 2287 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Sep 9 23:52:22.526045 kubelet[2287]: I0909 23:52:22.524997 2287 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:52:22.526045 kubelet[2287]: E0909 23:52:22.526030 2287 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:52:22.610662 kubelet[2287]: E0909 23:52:22.610622 2287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:52:22.626932 
kubelet[2287]: E0909 23:52:22.626898 2287 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 23:52:22.671105 kubelet[2287]: I0909 23:52:22.671048 2287 policy_none.go:49] "None policy: Start" Sep 9 23:52:22.672115 kubelet[2287]: I0909 23:52:22.672061 2287 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:52:22.672115 kubelet[2287]: I0909 23:52:22.672098 2287 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:52:22.681917 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:52:22.701703 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:52:22.704734 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:52:22.711917 kubelet[2287]: E0909 23:52:22.711777 2287 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:52:22.712396 kubelet[2287]: E0909 23:52:22.712138 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Sep 9 23:52:22.725600 kubelet[2287]: I0909 23:52:22.725490 2287 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:52:22.725722 kubelet[2287]: I0909 23:52:22.725697 2287 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:52:22.725759 kubelet[2287]: I0909 23:52:22.725720 2287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:52:22.725951 kubelet[2287]: I0909 23:52:22.725935 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:52:22.727346 kubelet[2287]: 
E0909 23:52:22.727316 2287 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 23:52:22.828976 kubelet[2287]: I0909 23:52:22.828796 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:52:22.829788 kubelet[2287]: E0909 23:52:22.829759 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Sep 9 23:52:22.835584 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 23:52:22.849966 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 9 23:52:22.852824 systemd[1]: Created slice kubepods-burstable-pod05143996b70b29c6e86a5d32385d86d9.slice - libcontainer container kubepods-burstable-pod05143996b70b29c6e86a5d32385d86d9.slice. 
Sep 9 23:52:22.911769 kubelet[2287]: I0909 23:52:22.911501 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:22.911769 kubelet[2287]: I0909 23:52:22.911545 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:22.911769 kubelet[2287]: I0909 23:52:22.911567 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:52:22.911769 kubelet[2287]: I0909 23:52:22.911582 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:22.911769 kubelet[2287]: I0909 23:52:22.911607 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:22.911969 
kubelet[2287]: I0909 23:52:22.911626 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:22.911969 kubelet[2287]: I0909 23:52:22.911641 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:22.911969 kubelet[2287]: I0909 23:52:22.911673 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:22.911969 kubelet[2287]: I0909 23:52:22.911693 2287 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:23.031701 kubelet[2287]: I0909 23:52:23.031590 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:52:23.032161 kubelet[2287]: E0909 23:52:23.032133 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Sep 9 
23:52:23.112796 kubelet[2287]: E0909 23:52:23.112754 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Sep 9 23:52:23.148704 containerd[1536]: time="2025-09-09T23:52:23.148664277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:23.152397 containerd[1536]: time="2025-09-09T23:52:23.152267831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:23.156257 containerd[1536]: time="2025-09-09T23:52:23.156225020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05143996b70b29c6e86a5d32385d86d9,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:23.182493 containerd[1536]: time="2025-09-09T23:52:23.182416180Z" level=info msg="connecting to shim e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898" address="unix:///run/containerd/s/97097baf928ac66811560de6fe092cba1de98c4df7f5ba93ecf8086620b7f725" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:23.196269 containerd[1536]: time="2025-09-09T23:52:23.196214312Z" level=info msg="connecting to shim 3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479" address="unix:///run/containerd/s/7511ed7b5d5518dbe3fbc8495bc6b9471394f17c59b7edc6efac8e54360b20d1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:23.197367 containerd[1536]: time="2025-09-09T23:52:23.197197623Z" level=info msg="connecting to shim 80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad" address="unix:///run/containerd/s/ce06d87e48c68581eddffef6f91437080c2f43efdf717feb48b504d92d1957db" namespace=k8s.io protocol=ttrpc version=3 Sep 9 
23:52:23.228611 systemd[1]: Started cri-containerd-3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479.scope - libcontainer container 3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479. Sep 9 23:52:23.230351 systemd[1]: Started cri-containerd-80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad.scope - libcontainer container 80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad. Sep 9 23:52:23.234715 systemd[1]: Started cri-containerd-e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898.scope - libcontainer container e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898. Sep 9 23:52:23.274517 containerd[1536]: time="2025-09-09T23:52:23.274453064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05143996b70b29c6e86a5d32385d86d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479\"" Sep 9 23:52:23.279239 containerd[1536]: time="2025-09-09T23:52:23.278533422Z" level=info msg="CreateContainer within sandbox \"3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 23:52:23.284181 containerd[1536]: time="2025-09-09T23:52:23.284077413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad\"" Sep 9 23:52:23.286868 containerd[1536]: time="2025-09-09T23:52:23.286818826Z" level=info msg="CreateContainer within sandbox \"80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 23:52:23.291383 containerd[1536]: time="2025-09-09T23:52:23.291301271Z" level=info msg="Container 214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15: CDI devices 
from CRI Config.CDIDevices: []" Sep 9 23:52:23.295207 containerd[1536]: time="2025-09-09T23:52:23.295148804Z" level=info msg="Container 52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:23.296730 containerd[1536]: time="2025-09-09T23:52:23.296552471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898\"" Sep 9 23:52:23.300071 containerd[1536]: time="2025-09-09T23:52:23.299711643Z" level=info msg="CreateContainer within sandbox \"e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 23:52:23.301959 containerd[1536]: time="2025-09-09T23:52:23.301909091Z" level=info msg="CreateContainer within sandbox \"3b65e704fa0a7a7944ed8e8500adff0982423c5074a32ac90b3aba4173d1f479\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15\"" Sep 9 23:52:23.302648 containerd[1536]: time="2025-09-09T23:52:23.302614282Z" level=info msg="StartContainer for \"214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15\"" Sep 9 23:52:23.306365 containerd[1536]: time="2025-09-09T23:52:23.306311302Z" level=info msg="CreateContainer within sandbox \"80698113813b4a97b216280a1c77d75269d5b41a44a54c54e869f234ad73c2ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1\"" Sep 9 23:52:23.306966 containerd[1536]: time="2025-09-09T23:52:23.306908637Z" level=info msg="connecting to shim 214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15" address="unix:///run/containerd/s/7511ed7b5d5518dbe3fbc8495bc6b9471394f17c59b7edc6efac8e54360b20d1" 
protocol=ttrpc version=3 Sep 9 23:52:23.307713 containerd[1536]: time="2025-09-09T23:52:23.307686067Z" level=info msg="StartContainer for \"52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1\"" Sep 9 23:52:23.310458 containerd[1536]: time="2025-09-09T23:52:23.309060231Z" level=info msg="connecting to shim 52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1" address="unix:///run/containerd/s/ce06d87e48c68581eddffef6f91437080c2f43efdf717feb48b504d92d1957db" protocol=ttrpc version=3 Sep 9 23:52:23.317148 containerd[1536]: time="2025-09-09T23:52:23.317086385Z" level=info msg="Container 0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:23.325160 containerd[1536]: time="2025-09-09T23:52:23.325104984Z" level=info msg="CreateContainer within sandbox \"e603ab2d228a864b80e4ca40c1c89ca3f19bd14bf5ed7badc432ae32dbca8898\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a\"" Sep 9 23:52:23.325538 containerd[1536]: time="2025-09-09T23:52:23.325495038Z" level=info msg="StartContainer for \"0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a\"" Sep 9 23:52:23.327486 containerd[1536]: time="2025-09-09T23:52:23.327292797Z" level=info msg="connecting to shim 0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a" address="unix:///run/containerd/s/97097baf928ac66811560de6fe092cba1de98c4df7f5ba93ecf8086620b7f725" protocol=ttrpc version=3 Sep 9 23:52:23.327766 systemd[1]: Started cri-containerd-214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15.scope - libcontainer container 214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15. Sep 9 23:52:23.331283 systemd[1]: Started cri-containerd-52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1.scope - libcontainer container 52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1. 
Sep 9 23:52:23.354638 systemd[1]: Started cri-containerd-0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a.scope - libcontainer container 0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a. Sep 9 23:52:23.369661 containerd[1536]: time="2025-09-09T23:52:23.369618497Z" level=info msg="StartContainer for \"214be84933527917732e1e20d053e283f25b54fc82df921cfb2a31352a822f15\" returns successfully" Sep 9 23:52:23.406488 containerd[1536]: time="2025-09-09T23:52:23.406425511Z" level=info msg="StartContainer for \"0de88aacb72533f0e93504ead370c3af9aada296f0686343ea2a71ea2230dd3a\" returns successfully" Sep 9 23:52:23.409244 containerd[1536]: time="2025-09-09T23:52:23.409191510Z" level=info msg="StartContainer for \"52e827dd9762345e5d0f9e79500aed7a24c6cbfcc2ab072d8e5044a79d85e6d1\" returns successfully" Sep 9 23:52:23.435707 kubelet[2287]: I0909 23:52:23.435669 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:52:23.436897 kubelet[2287]: E0909 23:52:23.436853 2287 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Sep 9 23:52:24.239768 kubelet[2287]: I0909 23:52:24.239732 2287 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:52:24.810413 kubelet[2287]: E0909 23:52:24.810372 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 23:52:24.893722 kubelet[2287]: I0909 23:52:24.893686 2287 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 23:52:24.893722 kubelet[2287]: E0909 23:52:24.893720 2287 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 23:52:25.471882 kubelet[2287]: E0909 23:52:25.471842 2287 kubelet.go:1915] "Failed 
creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 23:52:25.501134 kubelet[2287]: I0909 23:52:25.501102 2287 apiserver.go:52] "Watching apiserver" Sep 9 23:52:25.508633 kubelet[2287]: I0909 23:52:25.508608 2287 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:52:27.094265 systemd[1]: Reload requested from client PID 2560 ('systemctl') (unit session-7.scope)... Sep 9 23:52:27.094281 systemd[1]: Reloading... Sep 9 23:52:27.157563 zram_generator::config[2603]: No configuration found. Sep 9 23:52:27.330225 systemd[1]: Reloading finished in 235 ms. Sep 9 23:52:27.349939 kubelet[2287]: I0909 23:52:27.349805 2287 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:52:27.350136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:52:27.370555 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:52:27.370798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:52:27.370863 systemd[1]: kubelet.service: Consumed 1.193s CPU time, 127.1M memory peak. Sep 9 23:52:27.373736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:52:27.531261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:52:27.542785 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:52:27.593280 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:52:27.593280 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 23:52:27.593280 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:52:27.593640 kubelet[2645]: I0909 23:52:27.593327 2645 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:52:27.598915 kubelet[2645]: I0909 23:52:27.598871 2645 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 23:52:27.598915 kubelet[2645]: I0909 23:52:27.598906 2645 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:52:27.599166 kubelet[2645]: I0909 23:52:27.599138 2645 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 23:52:27.600544 kubelet[2645]: I0909 23:52:27.600480 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 23:52:27.602586 kubelet[2645]: I0909 23:52:27.602413 2645 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:52:27.606076 kubelet[2645]: I0909 23:52:27.606051 2645 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 23:52:27.609636 kubelet[2645]: I0909 23:52:27.609425 2645 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 23:52:27.609707 kubelet[2645]: I0909 23:52:27.609625 2645 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 23:52:27.609998 kubelet[2645]: I0909 23:52:27.609973 2645 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:52:27.610169 kubelet[2645]: I0909 23:52:27.610001 2645 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 23:52:27.610254 kubelet[2645]: I0909 23:52:27.610180 2645 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:52:27.610254 kubelet[2645]: I0909 23:52:27.610191 2645 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 23:52:27.610254 kubelet[2645]: I0909 23:52:27.610226 2645 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:52:27.610337 kubelet[2645]: I0909 23:52:27.610326 2645 kubelet.go:408] "Attempting to sync node with API server" Sep 9 23:52:27.610360 kubelet[2645]: I0909 23:52:27.610340 2645 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:52:27.610360 kubelet[2645]: I0909 23:52:27.610356 2645 kubelet.go:314] "Adding apiserver pod source" Sep 9 23:52:27.610460 kubelet[2645]: I0909 23:52:27.610368 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:52:27.611713 kubelet[2645]: I0909 23:52:27.611685 2645 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 23:52:27.612166 kubelet[2645]: I0909 23:52:27.612131 2645 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 23:52:27.612581 kubelet[2645]: I0909 23:52:27.612564 2645 server.go:1274] "Started kubelet" Sep 9 23:52:27.614042 kubelet[2645]: I0909 23:52:27.613901 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:52:27.614199 kubelet[2645]: I0909 23:52:27.614173 2645 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:52:27.614386 kubelet[2645]: I0909 23:52:27.614232 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:52:27.615937 kubelet[2645]: I0909 23:52:27.615901 2645 server.go:449] "Adding debug handlers to kubelet server" Sep 9 23:52:27.617168 kubelet[2645]: 
I0909 23:52:27.617011 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:52:27.617503 kubelet[2645]: I0909 23:52:27.617451 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:52:27.619467 kubelet[2645]: I0909 23:52:27.619086 2645 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 23:52:27.619467 kubelet[2645]: I0909 23:52:27.619192 2645 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 23:52:27.619467 kubelet[2645]: I0909 23:52:27.619294 2645 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:52:27.620337 kubelet[2645]: E0909 23:52:27.620076 2645 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 23:52:27.620788 kubelet[2645]: E0909 23:52:27.620651 2645 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:52:27.623440 kubelet[2645]: I0909 23:52:27.621733 2645 factory.go:221] Registration of the systemd container factory successfully Sep 9 23:52:27.623440 kubelet[2645]: I0909 23:52:27.621844 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:52:27.634998 kubelet[2645]: I0909 23:52:27.634953 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 23:52:27.636737 kubelet[2645]: I0909 23:52:27.636706 2645 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 23:52:27.636737 kubelet[2645]: I0909 23:52:27.636737 2645 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 23:52:27.636839 kubelet[2645]: I0909 23:52:27.636756 2645 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 23:52:27.638477 kubelet[2645]: E0909 23:52:27.637765 2645 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 23:52:27.642680 kubelet[2645]: I0909 23:52:27.642648 2645 factory.go:221] Registration of the containerd container factory successfully Sep 9 23:52:27.676269 kubelet[2645]: I0909 23:52:27.676242 2645 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 23:52:27.676498 kubelet[2645]: I0909 23:52:27.676480 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 23:52:27.676602 kubelet[2645]: I0909 23:52:27.676591 2645 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:52:27.676838 kubelet[2645]: I0909 23:52:27.676822 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 23:52:27.676922 kubelet[2645]: I0909 23:52:27.676896 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 23:52:27.676967 kubelet[2645]: I0909 23:52:27.676959 2645 policy_none.go:49] "None policy: Start" Sep 9 23:52:27.677653 kubelet[2645]: I0909 23:52:27.677631 2645 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 23:52:27.677701 kubelet[2645]: I0909 23:52:27.677660 2645 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:52:27.677853 kubelet[2645]: I0909 23:52:27.677836 2645 state_mem.go:75] "Updated machine memory state" Sep 9 23:52:27.682389 kubelet[2645]: I0909 23:52:27.682366 2645 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 23:52:27.682887 kubelet[2645]: I0909 23:52:27.682874 2645 eviction_manager.go:189] "Eviction manager: 
starting control loop" Sep 9 23:52:27.683062 kubelet[2645]: I0909 23:52:27.683029 2645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:52:27.683412 kubelet[2645]: I0909 23:52:27.683395 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:52:27.755101 kubelet[2645]: E0909 23:52:27.755038 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:27.785451 kubelet[2645]: I0909 23:52:27.785415 2645 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 23:52:27.796662 kubelet[2645]: I0909 23:52:27.795386 2645 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 23:52:27.796662 kubelet[2645]: I0909 23:52:27.796025 2645 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 23:52:27.819940 kubelet[2645]: I0909 23:52:27.819880 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:27.819940 kubelet[2645]: I0909 23:52:27.819924 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:27.820125 kubelet[2645]: I0909 23:52:27.819958 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:27.820125 kubelet[2645]: I0909 23:52:27.819987 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:27.820125 kubelet[2645]: I0909 23:52:27.820019 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:27.820125 kubelet[2645]: I0909 23:52:27.820086 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:27.820125 kubelet[2645]: I0909 23:52:27.820116 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:27.820239 kubelet[2645]: I0909 23:52:27.820148 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") 
pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 23:52:27.820239 kubelet[2645]: I0909 23:52:27.820166 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05143996b70b29c6e86a5d32385d86d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05143996b70b29c6e86a5d32385d86d9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 23:52:28.611011 kubelet[2645]: I0909 23:52:28.610965 2645 apiserver.go:52] "Watching apiserver" Sep 9 23:52:28.620189 kubelet[2645]: I0909 23:52:28.620132 2645 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 23:52:28.662822 kubelet[2645]: E0909 23:52:28.662580 2645 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 23:52:28.685139 kubelet[2645]: I0909 23:52:28.684636 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.684499337 podStartE2EDuration="1.684499337s" podCreationTimestamp="2025-09-09 23:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:52:28.684387544 +0000 UTC m=+1.137672433" watchObservedRunningTime="2025-09-09 23:52:28.684499337 +0000 UTC m=+1.137784147" Sep 9 23:52:28.702146 kubelet[2645]: I0909 23:52:28.702073 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.702054459 podStartE2EDuration="1.702054459s" podCreationTimestamp="2025-09-09 23:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 23:52:28.693303127 +0000 UTC m=+1.146587937" watchObservedRunningTime="2025-09-09 23:52:28.702054459 +0000 UTC m=+1.155339269" Sep 9 23:52:28.713013 kubelet[2645]: I0909 23:52:28.712958 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.712938537 podStartE2EDuration="2.712938537s" podCreationTimestamp="2025-09-09 23:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:52:28.702246018 +0000 UTC m=+1.155530828" watchObservedRunningTime="2025-09-09 23:52:28.712938537 +0000 UTC m=+1.166223347" Sep 9 23:52:33.752620 kubelet[2645]: I0909 23:52:33.752571 2645 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 23:52:33.753354 containerd[1536]: time="2025-09-09T23:52:33.753314046Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 23:52:33.753900 kubelet[2645]: I0909 23:52:33.753567 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 23:52:34.096879 systemd[1]: Created slice kubepods-besteffort-podba63ccbb_ec35_4f8d_8282_0ed58e10f185.slice - libcontainer container kubepods-besteffort-podba63ccbb_ec35_4f8d_8282_0ed58e10f185.slice. 
Sep 9 23:52:34.155269 kubelet[2645]: I0909 23:52:34.155179 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-kube-proxy\") pod \"kube-proxy-pcms5\" (UID: \"ba63ccbb-ec35-4f8d-8282-0ed58e10f185\") " pod="kube-system/kube-proxy-pcms5" Sep 9 23:52:34.155269 kubelet[2645]: I0909 23:52:34.155225 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-lib-modules\") pod \"kube-proxy-pcms5\" (UID: \"ba63ccbb-ec35-4f8d-8282-0ed58e10f185\") " pod="kube-system/kube-proxy-pcms5" Sep 9 23:52:34.155269 kubelet[2645]: I0909 23:52:34.155246 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-xtables-lock\") pod \"kube-proxy-pcms5\" (UID: \"ba63ccbb-ec35-4f8d-8282-0ed58e10f185\") " pod="kube-system/kube-proxy-pcms5" Sep 9 23:52:34.155269 kubelet[2645]: I0909 23:52:34.155261 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzn7\" (UniqueName: \"kubernetes.io/projected/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-kube-api-access-mxzn7\") pod \"kube-proxy-pcms5\" (UID: \"ba63ccbb-ec35-4f8d-8282-0ed58e10f185\") " pod="kube-system/kube-proxy-pcms5" Sep 9 23:52:34.266165 kubelet[2645]: E0909 23:52:34.266111 2645 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 23:52:34.266165 kubelet[2645]: E0909 23:52:34.266145 2645 projected.go:194] Error preparing data for projected volume kube-api-access-mxzn7 for pod kube-system/kube-proxy-pcms5: configmap "kube-root-ca.crt" not found Sep 9 23:52:34.266316 kubelet[2645]: E0909 23:52:34.266260 2645 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-kube-api-access-mxzn7 podName:ba63ccbb-ec35-4f8d-8282-0ed58e10f185 nodeName:}" failed. No retries permitted until 2025-09-09 23:52:34.766236596 +0000 UTC m=+7.219521406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mxzn7" (UniqueName: "kubernetes.io/projected/ba63ccbb-ec35-4f8d-8282-0ed58e10f185-kube-api-access-mxzn7") pod "kube-proxy-pcms5" (UID: "ba63ccbb-ec35-4f8d-8282-0ed58e10f185") : configmap "kube-root-ca.crt" not found Sep 9 23:52:34.882228 systemd[1]: Created slice kubepods-besteffort-pod4ed6148e_9b85_40ae_b2ed_43f79e8f69bf.slice - libcontainer container kubepods-besteffort-pod4ed6148e_9b85_40ae_b2ed_43f79e8f69bf.slice. Sep 9 23:52:35.011004 containerd[1536]: time="2025-09-09T23:52:35.010939876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcms5,Uid:ba63ccbb-ec35-4f8d-8282-0ed58e10f185,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:35.032677 containerd[1536]: time="2025-09-09T23:52:35.032623412Z" level=info msg="connecting to shim 767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6" address="unix:///run/containerd/s/275ac86203c0ade74226bf1e3768aae879f37898450012d8260ccc7d40e02bb3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:35.061788 kubelet[2645]: I0909 23:52:35.061735 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ed6148e-9b85-40ae-b2ed-43f79e8f69bf-var-lib-calico\") pod \"tigera-operator-58fc44c59b-6wdcd\" (UID: \"4ed6148e-9b85-40ae-b2ed-43f79e8f69bf\") " pod="tigera-operator/tigera-operator-58fc44c59b-6wdcd" Sep 9 23:52:35.061788 kubelet[2645]: I0909 23:52:35.061784 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56xr\" (UniqueName: 
\"kubernetes.io/projected/4ed6148e-9b85-40ae-b2ed-43f79e8f69bf-kube-api-access-f56xr\") pod \"tigera-operator-58fc44c59b-6wdcd\" (UID: \"4ed6148e-9b85-40ae-b2ed-43f79e8f69bf\") " pod="tigera-operator/tigera-operator-58fc44c59b-6wdcd" Sep 9 23:52:35.066686 systemd[1]: Started cri-containerd-767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6.scope - libcontainer container 767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6. Sep 9 23:52:35.090513 containerd[1536]: time="2025-09-09T23:52:35.090452867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pcms5,Uid:ba63ccbb-ec35-4f8d-8282-0ed58e10f185,Namespace:kube-system,Attempt:0,} returns sandbox id \"767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6\"" Sep 9 23:52:35.095418 containerd[1536]: time="2025-09-09T23:52:35.095366156Z" level=info msg="CreateContainer within sandbox \"767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:52:35.108224 containerd[1536]: time="2025-09-09T23:52:35.106941588Z" level=info msg="Container 73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:35.119631 containerd[1536]: time="2025-09-09T23:52:35.119582057Z" level=info msg="CreateContainer within sandbox \"767dcbfb2f67c9771404cd9ec5424c7aef0a18a7dad83304b79cf8f646174aa6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf\"" Sep 9 23:52:35.120506 containerd[1536]: time="2025-09-09T23:52:35.120464781Z" level=info msg="StartContainer for \"73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf\"" Sep 9 23:52:35.122180 containerd[1536]: time="2025-09-09T23:52:35.122109702Z" level=info msg="connecting to shim 73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf" 
address="unix:///run/containerd/s/275ac86203c0ade74226bf1e3768aae879f37898450012d8260ccc7d40e02bb3" protocol=ttrpc version=3 Sep 9 23:52:35.146689 systemd[1]: Started cri-containerd-73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf.scope - libcontainer container 73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf. Sep 9 23:52:35.187213 containerd[1536]: time="2025-09-09T23:52:35.187144595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6wdcd,Uid:4ed6148e-9b85-40ae-b2ed-43f79e8f69bf,Namespace:tigera-operator,Attempt:0,}" Sep 9 23:52:35.207038 containerd[1536]: time="2025-09-09T23:52:35.206987902Z" level=info msg="StartContainer for \"73995306d097d243ba388f9e123d71ba4f932076a4a98ea364db89814d97a4bf\" returns successfully" Sep 9 23:52:35.265917 containerd[1536]: time="2025-09-09T23:52:35.265861078Z" level=info msg="connecting to shim a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858" address="unix:///run/containerd/s/3d8c320410873ce5eee84770b52bdd3e601204c836d10d2a5c9b10a3e7b7e18a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:35.292724 systemd[1]: Started cri-containerd-a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858.scope - libcontainer container a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858. 
Sep 9 23:52:35.333371 containerd[1536]: time="2025-09-09T23:52:35.333317725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6wdcd,Uid:4ed6148e-9b85-40ae-b2ed-43f79e8f69bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858\"" Sep 9 23:52:35.336194 containerd[1536]: time="2025-09-09T23:52:35.336107660Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 23:52:35.684504 kubelet[2645]: I0909 23:52:35.684441 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pcms5" podStartSLOduration=1.684412633 podStartE2EDuration="1.684412633s" podCreationTimestamp="2025-09-09 23:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:52:35.683813673 +0000 UTC m=+8.137098523" watchObservedRunningTime="2025-09-09 23:52:35.684412633 +0000 UTC m=+8.137697403" Sep 9 23:52:36.478252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257399002.mount: Deactivated successfully. 
Sep 9 23:52:37.345808 containerd[1536]: time="2025-09-09T23:52:37.345078223Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:37.345808 containerd[1536]: time="2025-09-09T23:52:37.345654128Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 23:52:37.346551 containerd[1536]: time="2025-09-09T23:52:37.346514286Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:37.349148 containerd[1536]: time="2025-09-09T23:52:37.349104919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:37.350073 containerd[1536]: time="2025-09-09T23:52:37.350042179Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.013867457s" Sep 9 23:52:37.350267 containerd[1536]: time="2025-09-09T23:52:37.350076411Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 23:52:37.353072 containerd[1536]: time="2025-09-09T23:52:37.352511960Z" level=info msg="CreateContainer within sandbox \"a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 23:52:37.360845 containerd[1536]: time="2025-09-09T23:52:37.360193440Z" level=info msg="Container 
2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:37.366745 containerd[1536]: time="2025-09-09T23:52:37.366699394Z" level=info msg="CreateContainer within sandbox \"a4ae7350deca87f7e323a504f13fe97848395986d11d65c3378178320a7d3858\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9\"" Sep 9 23:52:37.367394 containerd[1536]: time="2025-09-09T23:52:37.367225831Z" level=info msg="StartContainer for \"2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9\"" Sep 9 23:52:37.368302 containerd[1536]: time="2025-09-09T23:52:37.368271306Z" level=info msg="connecting to shim 2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9" address="unix:///run/containerd/s/3d8c320410873ce5eee84770b52bdd3e601204c836d10d2a5c9b10a3e7b7e18a" protocol=ttrpc version=3 Sep 9 23:52:37.403677 systemd[1]: Started cri-containerd-2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9.scope - libcontainer container 2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9. 
Sep 9 23:52:37.446088 containerd[1536]: time="2025-09-09T23:52:37.445963052Z" level=info msg="StartContainer for \"2192a33519012eee182dc6d68bbfe9254cecd155475b2131223ab93565c1ebb9\" returns successfully" Sep 9 23:52:39.165904 kubelet[2645]: I0909 23:52:39.165817 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-6wdcd" podStartSLOduration=3.149779318 podStartE2EDuration="5.165798997s" podCreationTimestamp="2025-09-09 23:52:34 +0000 UTC" firstStartedPulling="2025-09-09 23:52:35.335130601 +0000 UTC m=+7.788415411" lastFinishedPulling="2025-09-09 23:52:37.35115028 +0000 UTC m=+9.804435090" observedRunningTime="2025-09-09 23:52:37.694430004 +0000 UTC m=+10.147714814" watchObservedRunningTime="2025-09-09 23:52:39.165798997 +0000 UTC m=+11.619083807" Sep 9 23:52:43.139591 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 9 23:52:43.141508 sshd[1733]: Connection closed by 10.0.0.1 port 46414 Sep 9 23:52:43.143127 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 9 23:52:43.147389 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:46414.service: Deactivated successfully. Sep 9 23:52:43.150137 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:52:43.150332 systemd[1]: session-7.scope: Consumed 6.565s CPU time, 223.7M memory peak. Sep 9 23:52:43.153547 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:52:43.155512 systemd-logind[1506]: Removed session 7. Sep 9 23:52:43.482665 update_engine[1508]: I20250909 23:52:43.482582 1508 update_attempter.cc:509] Updating boot flags... Sep 9 23:52:50.449870 systemd[1]: Created slice kubepods-besteffort-podb1a0f43d_14a8_456c_bc1a_bc6b91c35ea2.slice - libcontainer container kubepods-besteffort-podb1a0f43d_14a8_456c_bc1a_bc6b91c35ea2.slice. 
Sep 9 23:52:50.558244 kubelet[2645]: I0909 23:52:50.558199 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4ns\" (UniqueName: \"kubernetes.io/projected/b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2-kube-api-access-hl4ns\") pod \"calico-typha-7b97b4db5-bs26w\" (UID: \"b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2\") " pod="calico-system/calico-typha-7b97b4db5-bs26w" Sep 9 23:52:50.558244 kubelet[2645]: I0909 23:52:50.558246 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2-typha-certs\") pod \"calico-typha-7b97b4db5-bs26w\" (UID: \"b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2\") " pod="calico-system/calico-typha-7b97b4db5-bs26w" Sep 9 23:52:50.558672 kubelet[2645]: I0909 23:52:50.558266 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2-tigera-ca-bundle\") pod \"calico-typha-7b97b4db5-bs26w\" (UID: \"b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2\") " pod="calico-system/calico-typha-7b97b4db5-bs26w" Sep 9 23:52:50.755313 containerd[1536]: time="2025-09-09T23:52:50.755240506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b97b4db5-bs26w,Uid:b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:50.817630 systemd[1]: Created slice kubepods-besteffort-pod5d6cb43e_5bec_4e47_acbf_246eedde2631.slice - libcontainer container kubepods-besteffort-pod5d6cb43e_5bec_4e47_acbf_246eedde2631.slice. 
Sep 9 23:52:50.826674 containerd[1536]: time="2025-09-09T23:52:50.826634714Z" level=info msg="connecting to shim e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa" address="unix:///run/containerd/s/da223173a1a5a4c945e751db688875df3d2eae57dc8c9b8e550c5e6567b22600" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:50.901658 systemd[1]: Started cri-containerd-e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa.scope - libcontainer container e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa. Sep 9 23:52:50.961836 kubelet[2645]: I0909 23:52:50.961784 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-lib-modules\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.961836 kubelet[2645]: I0909 23:52:50.961834 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5d6cb43e-5bec-4e47-acbf-246eedde2631-node-certs\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962012 kubelet[2645]: I0909 23:52:50.961854 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-cni-bin-dir\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962012 kubelet[2645]: I0909 23:52:50.961870 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-policysync\") pod \"calico-node-s42bf\" (UID: 
\"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962012 kubelet[2645]: I0909 23:52:50.961890 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-cni-log-dir\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962012 kubelet[2645]: I0909 23:52:50.961905 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-var-run-calico\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962012 kubelet[2645]: I0909 23:52:50.961922 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-cni-net-dir\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962179 kubelet[2645]: I0909 23:52:50.961939 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-flexvol-driver-host\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962179 kubelet[2645]: I0909 23:52:50.961956 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d6cb43e-5bec-4e47-acbf-246eedde2631-tigera-ca-bundle\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " 
pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962179 kubelet[2645]: I0909 23:52:50.961972 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-var-lib-calico\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962179 kubelet[2645]: I0909 23:52:50.961989 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd294\" (UniqueName: \"kubernetes.io/projected/5d6cb43e-5bec-4e47-acbf-246eedde2631-kube-api-access-pd294\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.962179 kubelet[2645]: I0909 23:52:50.962004 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d6cb43e-5bec-4e47-acbf-246eedde2631-xtables-lock\") pod \"calico-node-s42bf\" (UID: \"5d6cb43e-5bec-4e47-acbf-246eedde2631\") " pod="calico-system/calico-node-s42bf" Sep 9 23:52:50.969681 containerd[1536]: time="2025-09-09T23:52:50.969168116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b97b4db5-bs26w,Uid:b1a0f43d-14a8-456c-bc1a-bc6b91c35ea2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa\"" Sep 9 23:52:50.978268 containerd[1536]: time="2025-09-09T23:52:50.978213440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 23:52:51.059196 kubelet[2645]: E0909 23:52:51.058711 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-vvndz" podUID="b851fe3b-9228-49ef-96c2-0523568005b4" Sep 9 23:52:51.062521 kubelet[2645]: I0909 23:52:51.062481 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b851fe3b-9228-49ef-96c2-0523568005b4-socket-dir\") pod \"csi-node-driver-vvndz\" (UID: \"b851fe3b-9228-49ef-96c2-0523568005b4\") " pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:51.062521 kubelet[2645]: I0909 23:52:51.062516 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjw5\" (UniqueName: \"kubernetes.io/projected/b851fe3b-9228-49ef-96c2-0523568005b4-kube-api-access-9xjw5\") pod \"csi-node-driver-vvndz\" (UID: \"b851fe3b-9228-49ef-96c2-0523568005b4\") " pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:51.062640 kubelet[2645]: I0909 23:52:51.062554 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b851fe3b-9228-49ef-96c2-0523568005b4-varrun\") pod \"csi-node-driver-vvndz\" (UID: \"b851fe3b-9228-49ef-96c2-0523568005b4\") " pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:51.062640 kubelet[2645]: I0909 23:52:51.062595 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b851fe3b-9228-49ef-96c2-0523568005b4-kubelet-dir\") pod \"csi-node-driver-vvndz\" (UID: \"b851fe3b-9228-49ef-96c2-0523568005b4\") " pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:51.062640 kubelet[2645]: I0909 23:52:51.062610 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b851fe3b-9228-49ef-96c2-0523568005b4-registration-dir\") pod \"csi-node-driver-vvndz\" (UID: 
\"b851fe3b-9228-49ef-96c2-0523568005b4\") " pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:51.064015 kubelet[2645]: E0909 23:52:51.063984 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.064088 kubelet[2645]: W0909 23:52:51.064005 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.064570 kubelet[2645]: E0909 23:52:51.064547 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.064570 kubelet[2645]: W0909 23:52:51.064566 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.064725 kubelet[2645]: E0909 23:52:51.064582 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.064984 kubelet[2645]: E0909 23:52:51.064785 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.065141 kubelet[2645]: E0909 23:52:51.065104 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.065141 kubelet[2645]: W0909 23:52:51.065119 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.065207 kubelet[2645]: E0909 23:52:51.065135 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.065399 kubelet[2645]: E0909 23:52:51.065384 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.065399 kubelet[2645]: W0909 23:52:51.065396 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.065493 kubelet[2645]: E0909 23:52:51.065412 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.065627 kubelet[2645]: E0909 23:52:51.065611 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.065627 kubelet[2645]: W0909 23:52:51.065623 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.065708 kubelet[2645]: E0909 23:52:51.065637 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.066610 kubelet[2645]: E0909 23:52:51.066586 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.066610 kubelet[2645]: W0909 23:52:51.066606 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.066743 kubelet[2645]: E0909 23:52:51.066627 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.067769 kubelet[2645]: E0909 23:52:51.067745 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.067769 kubelet[2645]: W0909 23:52:51.067764 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.068287 kubelet[2645]: E0909 23:52:51.068263 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.069561 kubelet[2645]: E0909 23:52:51.069421 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.069561 kubelet[2645]: W0909 23:52:51.069453 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.069561 kubelet[2645]: E0909 23:52:51.069469 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.070894 kubelet[2645]: E0909 23:52:51.070682 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.070894 kubelet[2645]: W0909 23:52:51.070710 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.070894 kubelet[2645]: E0909 23:52:51.070733 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.072178 kubelet[2645]: E0909 23:52:51.071621 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.072178 kubelet[2645]: W0909 23:52:51.071638 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.072178 kubelet[2645]: E0909 23:52:51.071651 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.084975 kubelet[2645]: E0909 23:52:51.084947 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.084975 kubelet[2645]: W0909 23:52:51.084968 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.085123 kubelet[2645]: E0909 23:52:51.084989 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.121361 containerd[1536]: time="2025-09-09T23:52:51.121312383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s42bf,Uid:5d6cb43e-5bec-4e47-acbf-246eedde2631,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:51.156189 containerd[1536]: time="2025-09-09T23:52:51.156127677Z" level=info msg="connecting to shim 783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2" address="unix:///run/containerd/s/2ae39db8eb963b5f8240364fd57a54591423b64d779aef052c2c610d24e8ecfe" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:52:51.163962 kubelet[2645]: E0909 23:52:51.163910 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.163962 kubelet[2645]: W0909 23:52:51.163938 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.163962 kubelet[2645]: E0909 23:52:51.163971 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.164849 kubelet[2645]: E0909 23:52:51.164817 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.164849 kubelet[2645]: W0909 23:52:51.164837 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.164849 kubelet[2645]: E0909 23:52:51.164858 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.165202 kubelet[2645]: E0909 23:52:51.165186 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.165367 kubelet[2645]: W0909 23:52:51.165255 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.165367 kubelet[2645]: E0909 23:52:51.165282 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.165534 kubelet[2645]: E0909 23:52:51.165522 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.165591 kubelet[2645]: W0909 23:52:51.165580 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.165646 kubelet[2645]: E0909 23:52:51.165636 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.165875 kubelet[2645]: E0909 23:52:51.165858 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.165913 kubelet[2645]: W0909 23:52:51.165875 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.165941 kubelet[2645]: E0909 23:52:51.165918 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.166180 kubelet[2645]: E0909 23:52:51.166167 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.166215 kubelet[2645]: W0909 23:52:51.166181 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.166241 kubelet[2645]: E0909 23:52:51.166197 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.166415 kubelet[2645]: E0909 23:52:51.166402 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.166462 kubelet[2645]: W0909 23:52:51.166415 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.166511 kubelet[2645]: E0909 23:52:51.166496 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.166670 kubelet[2645]: E0909 23:52:51.166658 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.166712 kubelet[2645]: W0909 23:52:51.166669 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.166712 kubelet[2645]: E0909 23:52:51.166707 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.167080 kubelet[2645]: E0909 23:52:51.167066 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.167080 kubelet[2645]: W0909 23:52:51.167080 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.167185 kubelet[2645]: E0909 23:52:51.167157 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.167503 kubelet[2645]: E0909 23:52:51.167487 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.167503 kubelet[2645]: W0909 23:52:51.167502 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.167613 kubelet[2645]: E0909 23:52:51.167587 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.167723 kubelet[2645]: E0909 23:52:51.167706 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.167723 kubelet[2645]: W0909 23:52:51.167721 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.167852 kubelet[2645]: E0909 23:52:51.167803 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.168038 kubelet[2645]: E0909 23:52:51.168022 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.168038 kubelet[2645]: W0909 23:52:51.168037 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.168102 kubelet[2645]: E0909 23:52:51.168087 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.168330 kubelet[2645]: E0909 23:52:51.168316 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.168330 kubelet[2645]: W0909 23:52:51.168330 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.168390 kubelet[2645]: E0909 23:52:51.168354 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.168578 kubelet[2645]: E0909 23:52:51.168564 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.168578 kubelet[2645]: W0909 23:52:51.168576 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.168645 kubelet[2645]: E0909 23:52:51.168591 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.168947 kubelet[2645]: E0909 23:52:51.168911 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.168947 kubelet[2645]: W0909 23:52:51.168941 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.169009 kubelet[2645]: E0909 23:52:51.168987 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.169252 kubelet[2645]: E0909 23:52:51.169237 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.169252 kubelet[2645]: W0909 23:52:51.169251 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.169321 kubelet[2645]: E0909 23:52:51.169307 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.169763 kubelet[2645]: E0909 23:52:51.169748 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.169810 kubelet[2645]: W0909 23:52:51.169762 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.169840 kubelet[2645]: E0909 23:52:51.169818 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.170103 kubelet[2645]: E0909 23:52:51.170090 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.170151 kubelet[2645]: W0909 23:52:51.170103 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.170230 kubelet[2645]: E0909 23:52:51.170195 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.170386 kubelet[2645]: E0909 23:52:51.170372 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.170386 kubelet[2645]: W0909 23:52:51.170385 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.170586 kubelet[2645]: E0909 23:52:51.170479 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.170586 kubelet[2645]: E0909 23:52:51.170580 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.170586 kubelet[2645]: W0909 23:52:51.170590 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.170671 kubelet[2645]: E0909 23:52:51.170607 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.170821 kubelet[2645]: E0909 23:52:51.170801 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.170866 kubelet[2645]: W0909 23:52:51.170814 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.170866 kubelet[2645]: E0909 23:52:51.170852 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.171262 kubelet[2645]: E0909 23:52:51.171233 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.171262 kubelet[2645]: W0909 23:52:51.171246 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.171262 kubelet[2645]: E0909 23:52:51.171259 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.171647 kubelet[2645]: E0909 23:52:51.171627 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.171647 kubelet[2645]: W0909 23:52:51.171643 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.172004 kubelet[2645]: E0909 23:52:51.171730 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:51.172004 kubelet[2645]: E0909 23:52:51.171861 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.172004 kubelet[2645]: W0909 23:52:51.171873 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.172004 kubelet[2645]: E0909 23:52:51.171883 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.173164 kubelet[2645]: E0909 23:52:51.173140 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.173164 kubelet[2645]: W0909 23:52:51.173160 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.173309 kubelet[2645]: E0909 23:52:51.173176 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.183635 systemd[1]: Started cri-containerd-783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2.scope - libcontainer container 783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2. 
Sep 9 23:52:51.184759 kubelet[2645]: E0909 23:52:51.184511 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:51.184759 kubelet[2645]: W0909 23:52:51.184533 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:51.184759 kubelet[2645]: E0909 23:52:51.184552 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:51.262083 containerd[1536]: time="2025-09-09T23:52:51.261373323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s42bf,Uid:5d6cb43e-5bec-4e47-acbf-246eedde2631,Namespace:calico-system,Attempt:0,} returns sandbox id \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\"" Sep 9 23:52:51.868221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697433560.mount: Deactivated successfully. 
Sep 9 23:52:52.637257 kubelet[2645]: E0909 23:52:52.637207 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvndz" podUID="b851fe3b-9228-49ef-96c2-0523568005b4" Sep 9 23:52:52.994608 containerd[1536]: time="2025-09-09T23:52:52.994549990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:52.996252 containerd[1536]: time="2025-09-09T23:52:52.996214242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 23:52:52.997086 containerd[1536]: time="2025-09-09T23:52:52.997048647Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:53.000024 containerd[1536]: time="2025-09-09T23:52:52.999962028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:53.000471 containerd[1536]: time="2025-09-09T23:52:53.000413308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.022164911s" Sep 9 23:52:53.000535 containerd[1536]: time="2025-09-09T23:52:53.000471663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 23:52:53.001647 containerd[1536]: time="2025-09-09T23:52:53.001612882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 23:52:53.022770 containerd[1536]: time="2025-09-09T23:52:53.022730559Z" level=info msg="CreateContainer within sandbox \"e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 23:52:53.031167 containerd[1536]: time="2025-09-09T23:52:53.031028907Z" level=info msg="Container 8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:53.050715 containerd[1536]: time="2025-09-09T23:52:53.050651869Z" level=info msg="CreateContainer within sandbox \"e270d6d6f1e3cc8e606943218d54ad17243c9c6f9dd661ad12bac0bbb3e145aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90\"" Sep 9 23:52:53.051294 containerd[1536]: time="2025-09-09T23:52:53.051256898Z" level=info msg="StartContainer for \"8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90\"" Sep 9 23:52:53.052617 containerd[1536]: time="2025-09-09T23:52:53.052530912Z" level=info msg="connecting to shim 8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90" address="unix:///run/containerd/s/da223173a1a5a4c945e751db688875df3d2eae57dc8c9b8e550c5e6567b22600" protocol=ttrpc version=3 Sep 9 23:52:53.087911 systemd[1]: Started cri-containerd-8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90.scope - libcontainer container 8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90. 
Sep 9 23:52:53.137779 containerd[1536]: time="2025-09-09T23:52:53.137676326Z" level=info msg="StartContainer for \"8217e96b01bec1c38ba170eaca8bc85c311b864bb5d9f17585c7269486d89f90\" returns successfully" Sep 9 23:52:53.743010 kubelet[2645]: I0909 23:52:53.742896 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b97b4db5-bs26w" podStartSLOduration=1.717545691 podStartE2EDuration="3.742877494s" podCreationTimestamp="2025-09-09 23:52:50 +0000 UTC" firstStartedPulling="2025-09-09 23:52:50.976097014 +0000 UTC m=+23.429381824" lastFinishedPulling="2025-09-09 23:52:53.001428817 +0000 UTC m=+25.454713627" observedRunningTime="2025-09-09 23:52:53.741358781 +0000 UTC m=+26.194643591" watchObservedRunningTime="2025-09-09 23:52:53.742877494 +0000 UTC m=+26.196162344" Sep 9 23:52:53.780880 kubelet[2645]: E0909 23:52:53.780753 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:53.781007 kubelet[2645]: W0909 23:52:53.780919 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:53.781007 kubelet[2645]: E0909 23:52:53.780942 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:53.781530 kubelet[2645]: E0909 23:52:53.781295 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:53.781530 kubelet[2645]: W0909 23:52:53.781322 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:53.781530 kubelet[2645]: E0909 23:52:53.781334 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:53.781823 kubelet[2645]: E0909 23:52:53.781783 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:53.781823 kubelet[2645]: W0909 23:52:53.781824 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:53.781882 kubelet[2645]: E0909 23:52:53.781836 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:53.782097 kubelet[2645]: E0909 23:52:53.782084 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:53.782129 kubelet[2645]: W0909 23:52:53.782114 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:53.782129 kubelet[2645]: E0909 23:52:53.782126 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:52:53.782393 kubelet[2645]: E0909 23:52:53.782380 2645 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:52:53.782429 kubelet[2645]: W0909 23:52:53.782393 2645 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:52:53.782429 kubelet[2645]: E0909 23:52:53.782405 2645 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:52:54.136116 containerd[1536]: time="2025-09-09T23:52:54.135587817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:54.137866 containerd[1536]: time="2025-09-09T23:52:54.136262364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 23:52:54.139248 containerd[1536]: time="2025-09-09T23:52:54.139097702Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:54.143850 containerd[1536]: time="2025-09-09T23:52:54.143784096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:54.144383 containerd[1536]: time="2025-09-09T23:52:54.144254739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.142607819s" Sep 9 23:52:54.144475 containerd[1536]: time="2025-09-09T23:52:54.144286696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 23:52:54.148500 containerd[1536]: time="2025-09-09T23:52:54.148357138Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 23:52:54.167740 containerd[1536]: time="2025-09-09T23:52:54.167697345Z" level=info msg="Container fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:52:54.183890 containerd[1536]: time="2025-09-09T23:52:54.183832402Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\"" Sep 9 23:52:54.185066 containerd[1536]: time="2025-09-09T23:52:54.184801926Z" level=info msg="StartContainer for \"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\"" Sep 9 23:52:54.189057 containerd[1536]: time="2025-09-09T23:52:54.188746618Z" level=info msg="connecting to shim fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9" address="unix:///run/containerd/s/2ae39db8eb963b5f8240364fd57a54591423b64d779aef052c2c610d24e8ecfe" protocol=ttrpc version=3 Sep 9 23:52:54.219691 systemd[1]: Started cri-containerd-fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9.scope - libcontainer container fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9. Sep 9 23:52:54.257212 containerd[1536]: time="2025-09-09T23:52:54.257173223Z" level=info msg="StartContainer for \"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\" returns successfully" Sep 9 23:52:54.272757 systemd[1]: cri-containerd-fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9.scope: Deactivated successfully. 
Sep 9 23:52:54.295844 containerd[1536]: time="2025-09-09T23:52:54.295784642Z" level=info msg="received exit event container_id:\"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\" id:\"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\" pid:3322 exited_at:{seconds:1757461974 nanos:291876388}" Sep 9 23:52:54.296739 containerd[1536]: time="2025-09-09T23:52:54.295953909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\" id:\"fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9\" pid:3322 exited_at:{seconds:1757461974 nanos:291876388}" Sep 9 23:52:54.345754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa7522349c4b41f878203d0dbbc1d228663bbf175416743038008f1f325857d9-rootfs.mount: Deactivated successfully. Sep 9 23:52:54.637452 kubelet[2645]: E0909 23:52:54.637383 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvndz" podUID="b851fe3b-9228-49ef-96c2-0523568005b4" Sep 9 23:52:54.745109 kubelet[2645]: I0909 23:52:54.745035 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:52:54.746698 containerd[1536]: time="2025-09-09T23:52:54.746641445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 23:52:56.637703 kubelet[2645]: E0909 23:52:56.637163 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvndz" podUID="b851fe3b-9228-49ef-96c2-0523568005b4" Sep 9 23:52:57.587954 containerd[1536]: time="2025-09-09T23:52:57.587902252Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:57.588793 containerd[1536]: time="2025-09-09T23:52:57.588544890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 23:52:57.589712 containerd[1536]: time="2025-09-09T23:52:57.589680817Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:57.592311 containerd[1536]: time="2025-09-09T23:52:57.592089982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:52:57.592844 containerd[1536]: time="2025-09-09T23:52:57.592812855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.846125934s" Sep 9 23:52:57.592948 containerd[1536]: time="2025-09-09T23:52:57.592932967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 23:52:57.596651 containerd[1536]: time="2025-09-09T23:52:57.596614010Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 23:52:57.613828 containerd[1536]: time="2025-09-09T23:52:57.613474723Z" level=info msg="Container 165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce: CDI devices from CRI Config.CDIDevices: 
[]" Sep 9 23:52:57.627597 containerd[1536]: time="2025-09-09T23:52:57.627550656Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\"" Sep 9 23:52:57.628196 containerd[1536]: time="2025-09-09T23:52:57.628169736Z" level=info msg="StartContainer for \"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\"" Sep 9 23:52:57.630418 containerd[1536]: time="2025-09-09T23:52:57.629754194Z" level=info msg="connecting to shim 165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce" address="unix:///run/containerd/s/2ae39db8eb963b5f8240364fd57a54591423b64d779aef052c2c610d24e8ecfe" protocol=ttrpc version=3 Sep 9 23:52:57.672669 systemd[1]: Started cri-containerd-165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce.scope - libcontainer container 165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce. Sep 9 23:52:57.711889 containerd[1536]: time="2025-09-09T23:52:57.711846741Z" level=info msg="StartContainer for \"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\" returns successfully" Sep 9 23:52:58.398074 systemd[1]: cri-containerd-165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce.scope: Deactivated successfully. Sep 9 23:52:58.398379 systemd[1]: cri-containerd-165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce.scope: Consumed 504ms CPU time, 177M memory peak, 2.8M read from disk, 165.8M written to disk. 
Sep 9 23:52:58.399335 containerd[1536]: time="2025-09-09T23:52:58.399278908Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:52:58.408846 containerd[1536]: time="2025-09-09T23:52:58.408799133Z" level=info msg="received exit event container_id:\"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\" id:\"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\" pid:3380 exited_at:{seconds:1757461978 nanos:408379758}" Sep 9 23:52:58.409342 containerd[1536]: time="2025-09-09T23:52:58.409306102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\" id:\"165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce\" pid:3380 exited_at:{seconds:1757461978 nanos:408379758}" Sep 9 23:52:58.429464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-165f139c5f482a21ec75bb310fc608605baa23d3a2b0cbf2bcb9c25fb22e44ce-rootfs.mount: Deactivated successfully. Sep 9 23:52:58.499493 kubelet[2645]: I0909 23:52:58.499455 2645 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 23:52:58.568471 systemd[1]: Created slice kubepods-burstable-pod86814066_4091_4ece_ad21_64b92b0fd215.slice - libcontainer container kubepods-burstable-pod86814066_4091_4ece_ad21_64b92b0fd215.slice. Sep 9 23:52:58.578051 systemd[1]: Created slice kubepods-besteffort-pod202f48c2_44e5_4a2c_a0ad_7f271e2da270.slice - libcontainer container kubepods-besteffort-pod202f48c2_44e5_4a2c_a0ad_7f271e2da270.slice. Sep 9 23:52:58.586999 systemd[1]: Created slice kubepods-besteffort-pod1e8c6f41_6bf9_44ba_ae7b_40afcbd8e181.slice - libcontainer container kubepods-besteffort-pod1e8c6f41_6bf9_44ba_ae7b_40afcbd8e181.slice. 
Sep 9 23:52:58.592228 systemd[1]: Created slice kubepods-burstable-pod99887b2a_928e_49ac_a824_45d2066ca4ce.slice - libcontainer container kubepods-burstable-pod99887b2a_928e_49ac_a824_45d2066ca4ce.slice. Sep 9 23:52:58.598479 systemd[1]: Created slice kubepods-besteffort-pod88942724_f223_4d02_aa47_d0a55fee0aad.slice - libcontainer container kubepods-besteffort-pod88942724_f223_4d02_aa47_d0a55fee0aad.slice. Sep 9 23:52:58.606125 systemd[1]: Created slice kubepods-besteffort-podeb5cb65b_8443_4026_8241_770ed1091884.slice - libcontainer container kubepods-besteffort-podeb5cb65b_8443_4026_8241_770ed1091884.slice. Sep 9 23:52:58.612058 systemd[1]: Created slice kubepods-besteffort-podb696c188_d2c3_4fbf_94e1_ea923f86a366.slice - libcontainer container kubepods-besteffort-podb696c188_d2c3_4fbf_94e1_ea923f86a366.slice. Sep 9 23:52:58.630160 kubelet[2645]: I0909 23:52:58.629752 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86814066-4091-4ece-ad21-64b92b0fd215-config-volume\") pod \"coredns-7c65d6cfc9-dpjs2\" (UID: \"86814066-4091-4ece-ad21-64b92b0fd215\") " pod="kube-system/coredns-7c65d6cfc9-dpjs2" Sep 9 23:52:58.630160 kubelet[2645]: I0909 23:52:58.629798 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88942724-f223-4d02-aa47-d0a55fee0aad-tigera-ca-bundle\") pod \"calico-kube-controllers-6658bf945-fm77m\" (UID: \"88942724-f223-4d02-aa47-d0a55fee0aad\") " pod="calico-system/calico-kube-controllers-6658bf945-fm77m" Sep 9 23:52:58.630160 kubelet[2645]: I0909 23:52:58.629818 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqxnt\" (UniqueName: \"kubernetes.io/projected/b696c188-d2c3-4fbf-94e1-ea923f86a366-kube-api-access-mqxnt\") pod \"calico-apiserver-66d8b7fdbc-9cghd\" (UID: 
\"b696c188-d2c3-4fbf-94e1-ea923f86a366\") " pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" Sep 9 23:52:58.630160 kubelet[2645]: I0909 23:52:58.629837 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb5cb65b-8443-4026-8241-770ed1091884-whisker-ca-bundle\") pod \"whisker-9c95b4c7b-zf7pl\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " pod="calico-system/whisker-9c95b4c7b-zf7pl" Sep 9 23:52:58.630160 kubelet[2645]: I0909 23:52:58.629853 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181-config\") pod \"goldmane-7988f88666-4nw59\" (UID: \"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181\") " pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:58.630399 kubelet[2645]: I0909 23:52:58.629872 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxcg\" (UniqueName: \"kubernetes.io/projected/eb5cb65b-8443-4026-8241-770ed1091884-kube-api-access-txxcg\") pod \"whisker-9c95b4c7b-zf7pl\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " pod="calico-system/whisker-9c95b4c7b-zf7pl" Sep 9 23:52:58.630399 kubelet[2645]: I0909 23:52:58.629890 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181-goldmane-key-pair\") pod \"goldmane-7988f88666-4nw59\" (UID: \"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181\") " pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:58.630399 kubelet[2645]: I0909 23:52:58.629910 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb5cb65b-8443-4026-8241-770ed1091884-whisker-backend-key-pair\") 
pod \"whisker-9c95b4c7b-zf7pl\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " pod="calico-system/whisker-9c95b4c7b-zf7pl" Sep 9 23:52:58.630399 kubelet[2645]: I0909 23:52:58.629925 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/202f48c2-44e5-4a2c-a0ad-7f271e2da270-calico-apiserver-certs\") pod \"calico-apiserver-66d8b7fdbc-hdp5z\" (UID: \"202f48c2-44e5-4a2c-a0ad-7f271e2da270\") " pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" Sep 9 23:52:58.630399 kubelet[2645]: I0909 23:52:58.629942 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b696c188-d2c3-4fbf-94e1-ea923f86a366-calico-apiserver-certs\") pod \"calico-apiserver-66d8b7fdbc-9cghd\" (UID: \"b696c188-d2c3-4fbf-94e1-ea923f86a366\") " pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" Sep 9 23:52:58.630531 kubelet[2645]: I0909 23:52:58.629961 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58sv\" (UniqueName: \"kubernetes.io/projected/86814066-4091-4ece-ad21-64b92b0fd215-kube-api-access-x58sv\") pod \"coredns-7c65d6cfc9-dpjs2\" (UID: \"86814066-4091-4ece-ad21-64b92b0fd215\") " pod="kube-system/coredns-7c65d6cfc9-dpjs2" Sep 9 23:52:58.630531 kubelet[2645]: I0909 23:52:58.629976 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181-goldmane-ca-bundle\") pod \"goldmane-7988f88666-4nw59\" (UID: \"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181\") " pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:58.630531 kubelet[2645]: I0909 23:52:58.630354 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bvb9x\" (UniqueName: \"kubernetes.io/projected/1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181-kube-api-access-bvb9x\") pod \"goldmane-7988f88666-4nw59\" (UID: \"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181\") " pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:58.630531 kubelet[2645]: I0909 23:52:58.630414 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8tc\" (UniqueName: \"kubernetes.io/projected/99887b2a-928e-49ac-a824-45d2066ca4ce-kube-api-access-xs8tc\") pod \"coredns-7c65d6cfc9-lncbd\" (UID: \"99887b2a-928e-49ac-a824-45d2066ca4ce\") " pod="kube-system/coredns-7c65d6cfc9-lncbd" Sep 9 23:52:58.630531 kubelet[2645]: I0909 23:52:58.630478 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99887b2a-928e-49ac-a824-45d2066ca4ce-config-volume\") pod \"coredns-7c65d6cfc9-lncbd\" (UID: \"99887b2a-928e-49ac-a824-45d2066ca4ce\") " pod="kube-system/coredns-7c65d6cfc9-lncbd" Sep 9 23:52:58.630637 kubelet[2645]: I0909 23:52:58.630500 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzf8\" (UniqueName: \"kubernetes.io/projected/202f48c2-44e5-4a2c-a0ad-7f271e2da270-kube-api-access-ktzf8\") pod \"calico-apiserver-66d8b7fdbc-hdp5z\" (UID: \"202f48c2-44e5-4a2c-a0ad-7f271e2da270\") " pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" Sep 9 23:52:58.630637 kubelet[2645]: I0909 23:52:58.630550 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdphp\" (UniqueName: \"kubernetes.io/projected/88942724-f223-4d02-aa47-d0a55fee0aad-kube-api-access-kdphp\") pod \"calico-kube-controllers-6658bf945-fm77m\" (UID: \"88942724-f223-4d02-aa47-d0a55fee0aad\") " pod="calico-system/calico-kube-controllers-6658bf945-fm77m" Sep 9 23:52:58.644249 systemd[1]: Created slice 
kubepods-besteffort-podb851fe3b_9228_49ef_96c2_0523568005b4.slice - libcontainer container kubepods-besteffort-podb851fe3b_9228_49ef_96c2_0523568005b4.slice. Sep 9 23:52:58.649313 containerd[1536]: time="2025-09-09T23:52:58.649118287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvndz,Uid:b851fe3b-9228-49ef-96c2-0523568005b4,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:58.759445 containerd[1536]: time="2025-09-09T23:52:58.756946530Z" level=error msg="Failed to destroy network for sandbox \"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.762335 containerd[1536]: time="2025-09-09T23:52:58.761613968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvndz,Uid:b851fe3b-9228-49ef-96c2-0523568005b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.768334 systemd[1]: run-netns-cni\x2d573db2e0\x2d2ae4\x2d52d4\x2d10f0\x2d40e771a5450b.mount: Deactivated successfully. 
Sep 9 23:52:58.774325 containerd[1536]: time="2025-09-09T23:52:58.772113013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 23:52:58.775530 kubelet[2645]: E0909 23:52:58.775474 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.776234 kubelet[2645]: E0909 23:52:58.775835 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:58.782081 kubelet[2645]: E0909 23:52:58.782008 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vvndz" Sep 9 23:52:58.782178 kubelet[2645]: E0909 23:52:58.782128 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vvndz_calico-system(b851fe3b-9228-49ef-96c2-0523568005b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vvndz_calico-system(b851fe3b-9228-49ef-96c2-0523568005b4)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"3168b4a400c642567fdf115a062b8ef559ec5e02ab27ce9654b3e692219594f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vvndz" podUID="b851fe3b-9228-49ef-96c2-0523568005b4" Sep 9 23:52:58.876029 containerd[1536]: time="2025-09-09T23:52:58.875985295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpjs2,Uid:86814066-4091-4ece-ad21-64b92b0fd215,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:58.882792 containerd[1536]: time="2025-09-09T23:52:58.882750966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-hdp5z,Uid:202f48c2-44e5-4a2c-a0ad-7f271e2da270,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:58.896676 containerd[1536]: time="2025-09-09T23:52:58.896631887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lncbd,Uid:99887b2a-928e-49ac-a824-45d2066ca4ce,Namespace:kube-system,Attempt:0,}" Sep 9 23:52:58.904763 containerd[1536]: time="2025-09-09T23:52:58.904149193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4nw59,Uid:1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:58.904763 containerd[1536]: time="2025-09-09T23:52:58.904392258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658bf945-fm77m,Uid:88942724-f223-4d02-aa47-d0a55fee0aad,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:58.910063 containerd[1536]: time="2025-09-09T23:52:58.909982400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c95b4c7b-zf7pl,Uid:eb5cb65b-8443-4026-8241-770ed1091884,Namespace:calico-system,Attempt:0,}" Sep 9 23:52:58.917111 containerd[1536]: time="2025-09-09T23:52:58.917054653Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-9cghd,Uid:b696c188-d2c3-4fbf-94e1-ea923f86a366,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:52:58.971801 containerd[1536]: time="2025-09-09T23:52:58.971750907Z" level=error msg="Failed to destroy network for sandbox \"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.977446 containerd[1536]: time="2025-09-09T23:52:58.977307731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpjs2,Uid:86814066-4091-4ece-ad21-64b92b0fd215,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.979043 kubelet[2645]: E0909 23:52:58.977817 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.979043 kubelet[2645]: E0909 23:52:58.977884 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-dpjs2" Sep 9 23:52:58.979043 kubelet[2645]: E0909 23:52:58.977904 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dpjs2" Sep 9 23:52:58.979229 kubelet[2645]: E0909 23:52:58.977947 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dpjs2_kube-system(86814066-4091-4ece-ad21-64b92b0fd215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dpjs2_kube-system(86814066-4091-4ece-ad21-64b92b0fd215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb9aca5ca3b7092d57d9258d251508720cd066d4f3b4f97ef87510a99ef594b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dpjs2" podUID="86814066-4091-4ece-ad21-64b92b0fd215" Sep 9 23:52:58.994985 containerd[1536]: time="2025-09-09T23:52:58.994858350Z" level=error msg="Failed to destroy network for sandbox \"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.996296 containerd[1536]: time="2025-09-09T23:52:58.996186510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-hdp5z,Uid:202f48c2-44e5-4a2c-a0ad-7f271e2da270,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.996660 kubelet[2645]: E0909 23:52:58.996608 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:58.996728 kubelet[2645]: E0909 23:52:58.996690 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" Sep 9 23:52:58.996728 kubelet[2645]: E0909 23:52:58.996711 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" Sep 9 23:52:58.996874 kubelet[2645]: E0909 23:52:58.996775 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-66d8b7fdbc-hdp5z_calico-apiserver(202f48c2-44e5-4a2c-a0ad-7f271e2da270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d8b7fdbc-hdp5z_calico-apiserver(202f48c2-44e5-4a2c-a0ad-7f271e2da270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc08e2cf275e424b1ac6c68cf29f1258f7b99caa27a5dee4a29a0d7edd7d0c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" podUID="202f48c2-44e5-4a2c-a0ad-7f271e2da270" Sep 9 23:52:59.018657 containerd[1536]: time="2025-09-09T23:52:59.018568905Z" level=error msg="Failed to destroy network for sandbox \"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.018818 containerd[1536]: time="2025-09-09T23:52:59.018774414Z" level=error msg="Failed to destroy network for sandbox \"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.022357 containerd[1536]: time="2025-09-09T23:52:59.022296734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4nw59,Uid:1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 9 23:52:59.022830 kubelet[2645]: E0909 23:52:59.022565 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.022830 kubelet[2645]: E0909 23:52:59.022634 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:59.022830 kubelet[2645]: E0909 23:52:59.022653 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-4nw59" Sep 9 23:52:59.023044 kubelet[2645]: E0909 23:52:59.022700 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-4nw59_calico-system(1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-4nw59_calico-system(1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b92c3c89a9d61da3e9c24676e56d67133d40de67bfbaa58beef24a0d9c3e413\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-4nw59" podUID="1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181" Sep 9 23:52:59.023598 containerd[1536]: time="2025-09-09T23:52:59.023373313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-9cghd,Uid:b696c188-d2c3-4fbf-94e1-ea923f86a366,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.023897 kubelet[2645]: E0909 23:52:59.023860 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.023952 kubelet[2645]: E0909 23:52:59.023922 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" Sep 9 23:52:59.023952 kubelet[2645]: E0909 23:52:59.023941 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" Sep 9 23:52:59.024070 kubelet[2645]: E0909 23:52:59.023981 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66d8b7fdbc-9cghd_calico-apiserver(b696c188-d2c3-4fbf-94e1-ea923f86a366)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66d8b7fdbc-9cghd_calico-apiserver(b696c188-d2c3-4fbf-94e1-ea923f86a366)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a854972349b6b7aec382db41dcb573a61ee94e5ae0ae4d06348acdcdeb1d26b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" podUID="b696c188-d2c3-4fbf-94e1-ea923f86a366" Sep 9 23:52:59.027479 containerd[1536]: time="2025-09-09T23:52:59.027204416Z" level=error msg="Failed to destroy network for sandbox \"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.029361 containerd[1536]: time="2025-09-09T23:52:59.029308057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658bf945-fm77m,Uid:88942724-f223-4d02-aa47-d0a55fee0aad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.029701 kubelet[2645]: E0909 23:52:59.029545 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.029701 kubelet[2645]: E0909 23:52:59.029665 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6658bf945-fm77m" Sep 9 23:52:59.029701 kubelet[2645]: E0909 23:52:59.029687 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6658bf945-fm77m" Sep 9 23:52:59.029860 kubelet[2645]: E0909 23:52:59.029755 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6658bf945-fm77m_calico-system(88942724-f223-4d02-aa47-d0a55fee0aad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6658bf945-fm77m_calico-system(88942724-f223-4d02-aa47-d0a55fee0aad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b24f99190bc04bc796c2abfc252c4ed9aafa58b4bf43c65de49e3a76dad5a045\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6658bf945-fm77m" podUID="88942724-f223-4d02-aa47-d0a55fee0aad" Sep 9 23:52:59.030958 containerd[1536]: time="2025-09-09T23:52:59.030652941Z" level=error msg="Failed to destroy network for sandbox \"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.031746 containerd[1536]: time="2025-09-09T23:52:59.031676163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c95b4c7b-zf7pl,Uid:eb5cb65b-8443-4026-8241-770ed1091884,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.032177 kubelet[2645]: E0909 23:52:59.031985 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.032177 kubelet[2645]: E0909 23:52:59.032070 
2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9c95b4c7b-zf7pl" Sep 9 23:52:59.032177 kubelet[2645]: E0909 23:52:59.032087 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9c95b4c7b-zf7pl" Sep 9 23:52:59.032287 kubelet[2645]: E0909 23:52:59.032136 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9c95b4c7b-zf7pl_calico-system(eb5cb65b-8443-4026-8241-770ed1091884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9c95b4c7b-zf7pl_calico-system(eb5cb65b-8443-4026-8241-770ed1091884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e708d1a1f42f04e75777095ff3260db2089c0313d32528b7fbe2f8a93915b7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9c95b4c7b-zf7pl" podUID="eb5cb65b-8443-4026-8241-770ed1091884" Sep 9 23:52:59.033928 containerd[1536]: time="2025-09-09T23:52:59.033883358Z" level=error msg="Failed to destroy network for sandbox \"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.035706 containerd[1536]: time="2025-09-09T23:52:59.035625619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lncbd,Uid:99887b2a-928e-49ac-a824-45d2066ca4ce,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.035895 kubelet[2645]: E0909 23:52:59.035861 2645 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:52:59.035947 kubelet[2645]: E0909 23:52:59.035917 2645 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lncbd" Sep 9 23:52:59.035947 kubelet[2645]: E0909 23:52:59.035937 2645 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lncbd" Sep 9 23:52:59.036008 kubelet[2645]: E0909 23:52:59.035985 2645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lncbd_kube-system(99887b2a-928e-49ac-a824-45d2066ca4ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lncbd_kube-system(99887b2a-928e-49ac-a824-45d2066ca4ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79acc176dbf82d06ad1d0bd85907d9f226c735e9778f78560d579080b44f10a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lncbd" podUID="99887b2a-928e-49ac-a824-45d2066ca4ce" Sep 9 23:53:02.011708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269390485.mount: Deactivated successfully. 
Sep 9 23:53:02.221894 containerd[1536]: time="2025-09-09T23:53:02.221018318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:02.223013 containerd[1536]: time="2025-09-09T23:53:02.222858683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 23:53:02.224731 containerd[1536]: time="2025-09-09T23:53:02.224644930Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:02.227060 containerd[1536]: time="2025-09-09T23:53:02.226970287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:02.227925 containerd[1536]: time="2025-09-09T23:53:02.227429838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.455268387s" Sep 9 23:53:02.227925 containerd[1536]: time="2025-09-09T23:53:02.227495997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 23:53:02.256265 containerd[1536]: time="2025-09-09T23:53:02.256203662Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 23:53:02.266871 containerd[1536]: time="2025-09-09T23:53:02.266396352Z" level=info msg="Container 
9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:02.290509 containerd[1536]: time="2025-09-09T23:53:02.290424744Z" level=info msg="CreateContainer within sandbox \"783954662368f8f475e9b943ce360042602b1c1b59c32ef99045caedcb9c26e2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\"" Sep 9 23:53:02.291943 containerd[1536]: time="2025-09-09T23:53:02.291912237Z" level=info msg="StartContainer for \"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\"" Sep 9 23:53:02.298194 containerd[1536]: time="2025-09-09T23:53:02.298126801Z" level=info msg="connecting to shim 9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4" address="unix:///run/containerd/s/2ae39db8eb963b5f8240364fd57a54591423b64d779aef052c2c610d24e8ecfe" protocol=ttrpc version=3 Sep 9 23:53:02.340688 systemd[1]: Started cri-containerd-9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4.scope - libcontainer container 9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4. Sep 9 23:53:02.394278 containerd[1536]: time="2025-09-09T23:53:02.394237770Z" level=info msg="StartContainer for \"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\" returns successfully" Sep 9 23:53:02.520804 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 23:53:02.520925 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 23:53:02.654473 kubelet[2645]: I0909 23:53:02.653604 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txxcg\" (UniqueName: \"kubernetes.io/projected/eb5cb65b-8443-4026-8241-770ed1091884-kube-api-access-txxcg\") pod \"eb5cb65b-8443-4026-8241-770ed1091884\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " Sep 9 23:53:02.654473 kubelet[2645]: I0909 23:53:02.653673 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb5cb65b-8443-4026-8241-770ed1091884-whisker-ca-bundle\") pod \"eb5cb65b-8443-4026-8241-770ed1091884\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " Sep 9 23:53:02.654473 kubelet[2645]: I0909 23:53:02.654358 2645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb5cb65b-8443-4026-8241-770ed1091884-whisker-backend-key-pair\") pod \"eb5cb65b-8443-4026-8241-770ed1091884\" (UID: \"eb5cb65b-8443-4026-8241-770ed1091884\") " Sep 9 23:53:02.655053 kubelet[2645]: I0909 23:53:02.654971 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5cb65b-8443-4026-8241-770ed1091884-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eb5cb65b-8443-4026-8241-770ed1091884" (UID: "eb5cb65b-8443-4026-8241-770ed1091884"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 23:53:02.661811 kubelet[2645]: I0909 23:53:02.661766 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5cb65b-8443-4026-8241-770ed1091884-kube-api-access-txxcg" (OuterVolumeSpecName: "kube-api-access-txxcg") pod "eb5cb65b-8443-4026-8241-770ed1091884" (UID: "eb5cb65b-8443-4026-8241-770ed1091884"). InnerVolumeSpecName "kube-api-access-txxcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 23:53:02.667834 kubelet[2645]: I0909 23:53:02.667783 2645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5cb65b-8443-4026-8241-770ed1091884-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eb5cb65b-8443-4026-8241-770ed1091884" (UID: "eb5cb65b-8443-4026-8241-770ed1091884"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 23:53:02.755462 kubelet[2645]: I0909 23:53:02.755394 2645 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb5cb65b-8443-4026-8241-770ed1091884-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 23:53:02.755462 kubelet[2645]: I0909 23:53:02.755442 2645 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb5cb65b-8443-4026-8241-770ed1091884-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 23:53:02.755462 kubelet[2645]: I0909 23:53:02.755453 2645 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txxcg\" (UniqueName: \"kubernetes.io/projected/eb5cb65b-8443-4026-8241-770ed1091884-kube-api-access-txxcg\") on node \"localhost\" DevicePath \"\"" Sep 9 23:53:02.798430 systemd[1]: Removed slice kubepods-besteffort-podeb5cb65b_8443_4026_8241_770ed1091884.slice - libcontainer container kubepods-besteffort-podeb5cb65b_8443_4026_8241_770ed1091884.slice. 
Sep 9 23:53:02.830149 kubelet[2645]: I0909 23:53:02.829923 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s42bf" podStartSLOduration=1.861854121 podStartE2EDuration="12.829869413s" podCreationTimestamp="2025-09-09 23:52:50 +0000 UTC" firstStartedPulling="2025-09-09 23:52:51.264164018 +0000 UTC m=+23.717448828" lastFinishedPulling="2025-09-09 23:53:02.23217931 +0000 UTC m=+34.685464120" observedRunningTime="2025-09-09 23:53:02.815422842 +0000 UTC m=+35.268707732" watchObservedRunningTime="2025-09-09 23:53:02.829869413 +0000 UTC m=+35.283154223" Sep 9 23:53:02.891846 systemd[1]: Created slice kubepods-besteffort-podbf684ea3_fbc9_43fd_8ce0_9cbaff2281de.slice - libcontainer container kubepods-besteffort-podbf684ea3_fbc9_43fd_8ce0_9cbaff2281de.slice. Sep 9 23:53:02.958263 kubelet[2645]: I0909 23:53:02.957696 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bf684ea3-fbc9-43fd-8ce0-9cbaff2281de-whisker-backend-key-pair\") pod \"whisker-955b99d5f-xzb5f\" (UID: \"bf684ea3-fbc9-43fd-8ce0-9cbaff2281de\") " pod="calico-system/whisker-955b99d5f-xzb5f" Sep 9 23:53:02.958263 kubelet[2645]: I0909 23:53:02.957763 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvq8d\" (UniqueName: \"kubernetes.io/projected/bf684ea3-fbc9-43fd-8ce0-9cbaff2281de-kube-api-access-lvq8d\") pod \"whisker-955b99d5f-xzb5f\" (UID: \"bf684ea3-fbc9-43fd-8ce0-9cbaff2281de\") " pod="calico-system/whisker-955b99d5f-xzb5f" Sep 9 23:53:02.958263 kubelet[2645]: I0909 23:53:02.957791 2645 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf684ea3-fbc9-43fd-8ce0-9cbaff2281de-whisker-ca-bundle\") pod \"whisker-955b99d5f-xzb5f\" (UID: \"bf684ea3-fbc9-43fd-8ce0-9cbaff2281de\") " 
pod="calico-system/whisker-955b99d5f-xzb5f" Sep 9 23:53:03.012768 systemd[1]: var-lib-kubelet-pods-eb5cb65b\x2d8443\x2d4026\x2d8241\x2d770ed1091884-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxxcg.mount: Deactivated successfully. Sep 9 23:53:03.012865 systemd[1]: var-lib-kubelet-pods-eb5cb65b\x2d8443\x2d4026\x2d8241\x2d770ed1091884-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 23:53:03.195544 containerd[1536]: time="2025-09-09T23:53:03.195501621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955b99d5f-xzb5f,Uid:bf684ea3-fbc9-43fd-8ce0-9cbaff2281de,Namespace:calico-system,Attempt:0,}" Sep 9 23:53:03.398805 systemd-networkd[1431]: cali732045748e7: Link UP Sep 9 23:53:03.399066 systemd-networkd[1431]: cali732045748e7: Gained carrier Sep 9 23:53:03.416623 containerd[1536]: 2025-09-09 23:53:03.221 [INFO][3765] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:53:03.416623 containerd[1536]: 2025-09-09 23:53:03.261 [INFO][3765] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--955b99d5f--xzb5f-eth0 whisker-955b99d5f- calico-system bf684ea3-fbc9-43fd-8ce0-9cbaff2281de 852 0 2025-09-09 23:53:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:955b99d5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-955b99d5f-xzb5f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali732045748e7 [] [] }} ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-" Sep 9 23:53:03.416623 containerd[1536]: 2025-09-09 23:53:03.261 [INFO][3765] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.416623 containerd[1536]: 2025-09-09 23:53:03.343 [INFO][3779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" HandleID="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Workload="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.343 [INFO][3779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" HandleID="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Workload="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011e890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-955b99d5f-xzb5f", "timestamp":"2025-09-09 23:53:03.343383302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.343 [INFO][3779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.343 [INFO][3779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.343 [INFO][3779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.354 [INFO][3779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" host="localhost" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.361 [INFO][3779] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.366 [INFO][3779] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.368 [INFO][3779] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.371 [INFO][3779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:03.417047 containerd[1536]: 2025-09-09 23:53:03.371 [INFO][3779] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" host="localhost" Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.373 [INFO][3779] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.377 [INFO][3779] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" host="localhost" Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.383 [INFO][3779] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" host="localhost" Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.383 [INFO][3779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" host="localhost" Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.383 [INFO][3779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:53:03.417245 containerd[1536]: 2025-09-09 23:53:03.383 [INFO][3779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" HandleID="k8s-pod-network.ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Workload="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.417354 containerd[1536]: 2025-09-09 23:53:03.385 [INFO][3765] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955b99d5f--xzb5f-eth0", GenerateName:"whisker-955b99d5f-", Namespace:"calico-system", SelfLink:"", UID:"bf684ea3-fbc9-43fd-8ce0-9cbaff2281de", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955b99d5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-955b99d5f-xzb5f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali732045748e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:03.417354 containerd[1536]: 2025-09-09 23:53:03.386 [INFO][3765] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.417422 containerd[1536]: 2025-09-09 23:53:03.387 [INFO][3765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali732045748e7 ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.417422 containerd[1536]: 2025-09-09 23:53:03.399 [INFO][3765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.417508 containerd[1536]: 2025-09-09 23:53:03.399 [INFO][3765] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" 
WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--955b99d5f--xzb5f-eth0", GenerateName:"whisker-955b99d5f-", Namespace:"calico-system", SelfLink:"", UID:"bf684ea3-fbc9-43fd-8ce0-9cbaff2281de", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"955b99d5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb", Pod:"whisker-955b99d5f-xzb5f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali732045748e7", MAC:"32:c9:7a:55:8e:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:03.417565 containerd[1536]: 2025-09-09 23:53:03.413 [INFO][3765] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" Namespace="calico-system" Pod="whisker-955b99d5f-xzb5f" WorkloadEndpoint="localhost-k8s-whisker--955b99d5f--xzb5f-eth0" Sep 9 23:53:03.451003 containerd[1536]: time="2025-09-09T23:53:03.450957713Z" level=info msg="connecting to shim 
ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb" address="unix:///run/containerd/s/e565b254b756e879ac32f95821198fb6dbf6eae15351f836cbbbc74ca25b65fc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:03.487696 systemd[1]: Started cri-containerd-ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb.scope - libcontainer container ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb. Sep 9 23:53:03.499697 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:03.522377 containerd[1536]: time="2025-09-09T23:53:03.522325340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-955b99d5f-xzb5f,Uid:bf684ea3-fbc9-43fd-8ce0-9cbaff2281de,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb\"" Sep 9 23:53:03.524227 containerd[1536]: time="2025-09-09T23:53:03.523966590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 23:53:03.640577 kubelet[2645]: I0909 23:53:03.640533 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5cb65b-8443-4026-8241-770ed1091884" path="/var/lib/kubelet/pods/eb5cb65b-8443-4026-8241-770ed1091884/volumes" Sep 9 23:53:03.905470 containerd[1536]: time="2025-09-09T23:53:03.905392840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\" id:\"b6ea672d322b60fd6886e852be6494417f96a464216b01fc40f87a023049b665\" pid:3850 exit_status:1 exited_at:{seconds:1757461983 nanos:904995088}" Sep 9 23:53:04.385118 containerd[1536]: time="2025-09-09T23:53:04.379985713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 23:53:04.387592 containerd[1536]: time="2025-09-09T23:53:04.387545300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:04.388591 containerd[1536]: time="2025-09-09T23:53:04.388532803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 864.524333ms" Sep 9 23:53:04.388591 containerd[1536]: time="2025-09-09T23:53:04.388575882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 23:53:04.389287 containerd[1536]: time="2025-09-09T23:53:04.389250710Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:04.390550 containerd[1536]: time="2025-09-09T23:53:04.390519488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:04.391259 containerd[1536]: time="2025-09-09T23:53:04.391235755Z" level=info msg="CreateContainer within sandbox \"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 23:53:04.398349 containerd[1536]: time="2025-09-09T23:53:04.398303591Z" level=info msg="Container d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:04.407979 containerd[1536]: time="2025-09-09T23:53:04.407928101Z" level=info msg="CreateContainer within sandbox \"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60\"" Sep 9 23:53:04.408651 containerd[1536]: time="2025-09-09T23:53:04.408453932Z" level=info msg="StartContainer for \"d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60\"" Sep 9 23:53:04.409758 containerd[1536]: time="2025-09-09T23:53:04.409728189Z" level=info msg="connecting to shim d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60" address="unix:///run/containerd/s/e565b254b756e879ac32f95821198fb6dbf6eae15351f836cbbbc74ca25b65fc" protocol=ttrpc version=3 Sep 9 23:53:04.430613 systemd[1]: Started cri-containerd-d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60.scope - libcontainer container d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60. Sep 9 23:53:04.465650 containerd[1536]: time="2025-09-09T23:53:04.465610125Z" level=info msg="StartContainer for \"d8cad7929f07c078c3555c1e09eb566003b4bcf8f159047230333b6cdc486c60\" returns successfully" Sep 9 23:53:04.466786 containerd[1536]: time="2025-09-09T23:53:04.466759185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 23:53:04.867899 containerd[1536]: time="2025-09-09T23:53:04.867854080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\" id:\"a532790c8fa12525aa3255ed2d8106cf7435eb5f591498ac53ffee788048eb21\" pid:4016 exit_status:1 exited_at:{seconds:1757461984 nanos:867459287}" Sep 9 23:53:05.250601 systemd-networkd[1431]: cali732045748e7: Gained IPv6LL Sep 9 23:53:05.732635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501034542.mount: Deactivated successfully. 
Sep 9 23:53:05.967742 containerd[1536]: time="2025-09-09T23:53:05.967666817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:05.968231 containerd[1536]: time="2025-09-09T23:53:05.968192128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 23:53:05.970411 containerd[1536]: time="2025-09-09T23:53:05.970371011Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:05.972628 containerd[1536]: time="2025-09-09T23:53:05.972584413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:05.973630 containerd[1536]: time="2025-09-09T23:53:05.973303040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.506513257s" Sep 9 23:53:05.973630 containerd[1536]: time="2025-09-09T23:53:05.973341800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 23:53:05.976463 containerd[1536]: time="2025-09-09T23:53:05.976422587Z" level=info msg="CreateContainer within sandbox \"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 23:53:06.002340 
containerd[1536]: time="2025-09-09T23:53:06.001718514Z" level=info msg="Container e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:06.016239 containerd[1536]: time="2025-09-09T23:53:06.016178993Z" level=info msg="CreateContainer within sandbox \"ef754b72333b7f31e3f56b06195d8fa7070169025b9f7ab8c9d082a5ab8decfb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6\"" Sep 9 23:53:06.016702 containerd[1536]: time="2025-09-09T23:53:06.016677985Z" level=info msg="StartContainer for \"e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6\"" Sep 9 23:53:06.018401 containerd[1536]: time="2025-09-09T23:53:06.018364677Z" level=info msg="connecting to shim e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6" address="unix:///run/containerd/s/e565b254b756e879ac32f95821198fb6dbf6eae15351f836cbbbc74ca25b65fc" protocol=ttrpc version=3 Sep 9 23:53:06.041637 systemd[1]: Started cri-containerd-e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6.scope - libcontainer container e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6. 
Sep 9 23:53:06.115652 containerd[1536]: time="2025-09-09T23:53:06.115604337Z" level=info msg="StartContainer for \"e46e0450f87bfccbf8864f2e3ef1fb8497918f202c2e3545f1253e007d4f56c6\" returns successfully" Sep 9 23:53:08.258413 kubelet[2645]: I0909 23:53:08.258344 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:53:08.300094 kubelet[2645]: I0909 23:53:08.299090 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-955b99d5f-xzb5f" podStartSLOduration=3.848225385 podStartE2EDuration="6.299069769s" podCreationTimestamp="2025-09-09 23:53:02 +0000 UTC" firstStartedPulling="2025-09-09 23:53:03.523659716 +0000 UTC m=+35.976944486" lastFinishedPulling="2025-09-09 23:53:05.97450406 +0000 UTC m=+38.427788870" observedRunningTime="2025-09-09 23:53:07.000984309 +0000 UTC m=+39.454269159" watchObservedRunningTime="2025-09-09 23:53:08.299069769 +0000 UTC m=+40.752354579" Sep 9 23:53:09.557944 systemd-networkd[1431]: vxlan.calico: Link UP Sep 9 23:53:09.557952 systemd-networkd[1431]: vxlan.calico: Gained carrier Sep 9 23:53:09.642060 containerd[1536]: time="2025-09-09T23:53:09.642021798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lncbd,Uid:99887b2a-928e-49ac-a824-45d2066ca4ce,Namespace:kube-system,Attempt:0,}" Sep 9 23:53:09.679490 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:58710.service - OpenSSH per-connection server daemon (10.0.0.1:58710). Sep 9 23:53:09.764846 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 58710 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:09.766872 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:09.779474 systemd-logind[1506]: New session 8 of user core. Sep 9 23:53:09.786022 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 23:53:09.809975 systemd-networkd[1431]: cali9b3595931e7: Link UP Sep 9 23:53:09.810390 systemd-networkd[1431]: cali9b3595931e7: Gained carrier Sep 9 23:53:09.826904 containerd[1536]: 2025-09-09 23:53:09.709 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0 coredns-7c65d6cfc9- kube-system 99887b2a-928e-49ac-a824-45d2066ca4ce 794 0 2025-09-09 23:52:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-lncbd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b3595931e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-" Sep 9 23:53:09.826904 containerd[1536]: 2025-09-09 23:53:09.709 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.826904 containerd[1536]: 2025-09-09 23:53:09.745 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" HandleID="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Workload="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.745 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" 
HandleID="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Workload="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-lncbd", "timestamp":"2025-09-09 23:53:09.744986099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.745 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.745 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.745 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.758 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" host="localhost" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.767 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.773 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.777 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.782 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:09.827115 containerd[1536]: 2025-09-09 23:53:09.782 [INFO][4277] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" host="localhost" Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.788 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.793 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" host="localhost" Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.801 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" host="localhost" Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.801 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" host="localhost" Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.802 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:53:09.827337 containerd[1536]: 2025-09-09 23:53:09.802 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" HandleID="k8s-pod-network.2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Workload="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.827554 containerd[1536]: 2025-09-09 23:53:09.806 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99887b2a-928e-49ac-a824-45d2066ca4ce", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-lncbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b3595931e7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:09.828611 containerd[1536]: 2025-09-09 23:53:09.806 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.828611 containerd[1536]: 2025-09-09 23:53:09.806 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b3595931e7 ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.828611 containerd[1536]: 2025-09-09 23:53:09.811 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.828798 containerd[1536]: 2025-09-09 23:53:09.811 [INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"99887b2a-928e-49ac-a824-45d2066ca4ce", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d", Pod:"coredns-7c65d6cfc9-lncbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b3595931e7", MAC:"62:53:3e:cf:21:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:09.828798 containerd[1536]: 2025-09-09 23:53:09.824 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lncbd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lncbd-eth0" Sep 9 23:53:09.867745 containerd[1536]: time="2025-09-09T23:53:09.867621259Z" level=info msg="connecting to shim 2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d" address="unix:///run/containerd/s/edf87cdde35abc3047141f7731dcc422d04e12feee483f67e46ff50d896e3c10" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:09.901664 systemd[1]: Started cri-containerd-2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d.scope - libcontainer container 2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d. Sep 9 23:53:09.916365 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:09.953146 containerd[1536]: time="2025-09-09T23:53:09.953106748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lncbd,Uid:99887b2a-928e-49ac-a824-45d2066ca4ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d\"" Sep 9 23:53:09.957854 containerd[1536]: time="2025-09-09T23:53:09.957288564Z" level=info msg="CreateContainer within sandbox \"2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:53:09.974371 containerd[1536]: time="2025-09-09T23:53:09.974331983Z" level=info msg="Container fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:09.976602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936205765.mount: Deactivated successfully. 
Sep 9 23:53:09.980426 sshd[4307]: Connection closed by 10.0.0.1 port 58710
Sep 9 23:53:09.980775 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Sep 9 23:53:09.982895 containerd[1536]: time="2025-09-09T23:53:09.982841892Z" level=info msg="CreateContainer within sandbox \"2b46f8f48b74f5789a7463fae202e4bd16523e7600a7fe3e7d955736b6f4f62d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767\""
Sep 9 23:53:09.985453 containerd[1536]: time="2025-09-09T23:53:09.985320534Z" level=info msg="StartContainer for \"fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767\""
Sep 9 23:53:09.985460 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:58710.service: Deactivated successfully.
Sep 9 23:53:09.986724 containerd[1536]: time="2025-09-09T23:53:09.986689753Z" level=info msg="connecting to shim fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767" address="unix:///run/containerd/s/edf87cdde35abc3047141f7731dcc422d04e12feee483f67e46ff50d896e3c10" protocol=ttrpc version=3
Sep 9 23:53:09.989486 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 23:53:09.991970 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit.
Sep 9 23:53:09.993310 systemd-logind[1506]: Removed session 8.
Sep 9 23:53:10.014643 systemd[1]: Started cri-containerd-fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767.scope - libcontainer container fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767.
Sep 9 23:53:10.043100 containerd[1536]: time="2025-09-09T23:53:10.042962148Z" level=info msg="StartContainer for \"fd39a9afafb9e73921d1104f1e82674153733219ec5e1b377ff72d3df3842767\" returns successfully" Sep 9 23:53:11.011095 kubelet[2645]: I0909 23:53:11.009015 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lncbd" podStartSLOduration=37.008969021 podStartE2EDuration="37.008969021s" podCreationTimestamp="2025-09-09 23:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:53:11.008085233 +0000 UTC m=+43.461370003" watchObservedRunningTime="2025-09-09 23:53:11.008969021 +0000 UTC m=+43.462253871" Sep 9 23:53:11.266988 systemd-networkd[1431]: cali9b3595931e7: Gained IPv6LL Sep 9 23:53:11.458666 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Sep 9 23:53:11.667994 containerd[1536]: time="2025-09-09T23:53:11.667747537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658bf945-fm77m,Uid:88942724-f223-4d02-aa47-d0a55fee0aad,Namespace:calico-system,Attempt:0,}" Sep 9 23:53:11.811202 systemd-networkd[1431]: cali725dc3abc8f: Link UP Sep 9 23:53:11.811691 systemd-networkd[1431]: cali725dc3abc8f: Gained carrier Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.712 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0 calico-kube-controllers-6658bf945- calico-system 88942724-f223-4d02-aa47-d0a55fee0aad 797 0 2025-09-09 23:52:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6658bf945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-6658bf945-fm77m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali725dc3abc8f [] [] }} ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.712 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.745 [INFO][4448] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" HandleID="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Workload="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.745 [INFO][4448] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" HandleID="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Workload="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6658bf945-fm77m", "timestamp":"2025-09-09 23:53:11.745671686 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 
9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.745 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.745 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.745 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.766 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.774 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.780 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.783 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.787 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.787 [INFO][4448] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.790 [INFO][4448] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115 Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.797 [INFO][4448] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.805 [INFO][4448] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.805 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" host="localhost" Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.806 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:53:11.829605 containerd[1536]: 2025-09-09 23:53:11.806 [INFO][4448] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" HandleID="k8s-pod-network.f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Workload="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.808 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0", GenerateName:"calico-kube-controllers-6658bf945-", Namespace:"calico-system", SelfLink:"", UID:"88942724-f223-4d02-aa47-d0a55fee0aad", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 51, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6658bf945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6658bf945-fm77m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali725dc3abc8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.808 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.808 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali725dc3abc8f ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.812 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.813 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0", GenerateName:"calico-kube-controllers-6658bf945-", Namespace:"calico-system", SelfLink:"", UID:"88942724-f223-4d02-aa47-d0a55fee0aad", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6658bf945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115", Pod:"calico-kube-controllers-6658bf945-fm77m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali725dc3abc8f", MAC:"42:60:49:62:15:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:11.830222 containerd[1536]: 2025-09-09 23:53:11.826 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" Namespace="calico-system" Pod="calico-kube-controllers-6658bf945-fm77m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6658bf945--fm77m-eth0" Sep 9 23:53:11.855468 containerd[1536]: time="2025-09-09T23:53:11.855383374Z" level=info msg="connecting to shim f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115" address="unix:///run/containerd/s/62cac88b1fffcd80e93fe5cf4159dc47209b2730f00e7ea373616eee741989d7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:11.880654 systemd[1]: Started cri-containerd-f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115.scope - libcontainer container f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115. 
Sep 9 23:53:11.892206 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:11.917677 containerd[1536]: time="2025-09-09T23:53:11.917539791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6658bf945-fm77m,Uid:88942724-f223-4d02-aa47-d0a55fee0aad,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115\"" Sep 9 23:53:11.919294 containerd[1536]: time="2025-09-09T23:53:11.919187087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 23:53:12.638308 containerd[1536]: time="2025-09-09T23:53:12.637648465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpjs2,Uid:86814066-4091-4ece-ad21-64b92b0fd215,Namespace:kube-system,Attempt:0,}" Sep 9 23:53:12.786730 systemd-networkd[1431]: cali36a2654838c: Link UP Sep 9 23:53:12.787311 systemd-networkd[1431]: cali36a2654838c: Gained carrier Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.696 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0 coredns-7c65d6cfc9- kube-system 86814066-4091-4ece-ad21-64b92b0fd215 792 0 2025-09-09 23:52:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dpjs2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36a2654838c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.696 
[INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.731 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" HandleID="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Workload="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.731 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" HandleID="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Workload="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b0e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dpjs2", "timestamp":"2025-09-09 23:53:12.731169704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.731 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.731 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.731 [INFO][4526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.741 [INFO][4526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.749 [INFO][4526] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.756 [INFO][4526] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.759 [INFO][4526] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.762 [INFO][4526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.762 [INFO][4526] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.765 [INFO][4526] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285 Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.770 [INFO][4526] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.779 [INFO][4526] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.779 [INFO][4526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" host="localhost" Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.779 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:53:12.804910 containerd[1536]: 2025-09-09 23:53:12.779 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" HandleID="k8s-pod-network.3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Workload="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.781 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86814066-4091-4ece-ad21-64b92b0fd215", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dpjs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a2654838c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.781 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.782 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36a2654838c ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.787 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.788 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"86814066-4091-4ece-ad21-64b92b0fd215", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285", Pod:"coredns-7c65d6cfc9-dpjs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36a2654838c", MAC:"c2:f3:a2:dc:b8:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:12.806226 containerd[1536]: 2025-09-09 23:53:12.800 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dpjs2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dpjs2-eth0" Sep 9 23:53:12.836278 containerd[1536]: time="2025-09-09T23:53:12.836038943Z" level=info msg="connecting to shim 3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285" address="unix:///run/containerd/s/bcc0e06f3fc45a6f4f05451e6eabe6a8adef679a499793b2f5b3f696b5d78f3e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:12.865566 systemd[1]: Started cri-containerd-3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285.scope - libcontainer container 3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285. 
Sep 9 23:53:12.886549 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:12.927903 containerd[1536]: time="2025-09-09T23:53:12.926424106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpjs2,Uid:86814066-4091-4ece-ad21-64b92b0fd215,Namespace:kube-system,Attempt:0,} returns sandbox id \"3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285\"" Sep 9 23:53:12.937495 containerd[1536]: time="2025-09-09T23:53:12.937326672Z" level=info msg="CreateContainer within sandbox \"3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 23:53:12.960696 containerd[1536]: time="2025-09-09T23:53:12.960635262Z" level=info msg="Container 5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:12.968165 containerd[1536]: time="2025-09-09T23:53:12.968080317Z" level=info msg="CreateContainer within sandbox \"3041079fc41672a2cc9f99481d0852fd3b8ee3672fbc08274773407746ec9285\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b\"" Sep 9 23:53:12.969078 containerd[1536]: time="2025-09-09T23:53:12.969045184Z" level=info msg="StartContainer for \"5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b\"" Sep 9 23:53:12.969996 containerd[1536]: time="2025-09-09T23:53:12.969969491Z" level=info msg="connecting to shim 5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b" address="unix:///run/containerd/s/bcc0e06f3fc45a6f4f05451e6eabe6a8adef679a499793b2f5b3f696b5d78f3e" protocol=ttrpc version=3 Sep 9 23:53:13.007704 systemd[1]: Started cri-containerd-5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b.scope - libcontainer container 5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b. 
Sep 9 23:53:13.092955 containerd[1536]: time="2025-09-09T23:53:13.092919508Z" level=info msg="StartContainer for \"5bb1259b4adfea92a89617fc99fe8ab4009e3b611efe61c9ca47174b7c0f9b7b\" returns successfully" Sep 9 23:53:13.186617 systemd-networkd[1431]: cali725dc3abc8f: Gained IPv6LL Sep 9 23:53:13.573048 containerd[1536]: time="2025-09-09T23:53:13.572979187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:13.573921 containerd[1536]: time="2025-09-09T23:53:13.573882855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 23:53:13.574986 containerd[1536]: time="2025-09-09T23:53:13.574958960Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:13.591023 containerd[1536]: time="2025-09-09T23:53:13.590946700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:13.591496 containerd[1536]: time="2025-09-09T23:53:13.591459493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.672231366s" Sep 9 23:53:13.591562 containerd[1536]: time="2025-09-09T23:53:13.591499292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 
23:53:13.599384 containerd[1536]: time="2025-09-09T23:53:13.599344465Z" level=info msg="CreateContainer within sandbox \"f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 23:53:13.606272 containerd[1536]: time="2025-09-09T23:53:13.606205570Z" level=info msg="Container 66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:13.614056 containerd[1536]: time="2025-09-09T23:53:13.613926984Z" level=info msg="CreateContainer within sandbox \"f7d7ef15dad3fb5f8bd8e81bcc3bfba9d741401e9597813b2c849156920eb115\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\"" Sep 9 23:53:13.614678 containerd[1536]: time="2025-09-09T23:53:13.614491776Z" level=info msg="StartContainer for \"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\"" Sep 9 23:53:13.615626 containerd[1536]: time="2025-09-09T23:53:13.615596921Z" level=info msg="connecting to shim 66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8" address="unix:///run/containerd/s/62cac88b1fffcd80e93fe5cf4159dc47209b2730f00e7ea373616eee741989d7" protocol=ttrpc version=3 Sep 9 23:53:13.634646 systemd[1]: Started cri-containerd-66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8.scope - libcontainer container 66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8. 
Sep 9 23:53:13.638521 containerd[1536]: time="2025-09-09T23:53:13.638477126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvndz,Uid:b851fe3b-9228-49ef-96c2-0523568005b4,Namespace:calico-system,Attempt:0,}" Sep 9 23:53:13.690093 containerd[1536]: time="2025-09-09T23:53:13.690001498Z" level=info msg="StartContainer for \"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\" returns successfully" Sep 9 23:53:13.765627 systemd-networkd[1431]: cali165d060e1fe: Link UP Sep 9 23:53:13.766230 systemd-networkd[1431]: cali165d060e1fe: Gained carrier Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.686 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vvndz-eth0 csi-node-driver- calico-system b851fe3b-9228-49ef-96c2-0523568005b4 700 0 2025-09-09 23:52:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vvndz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali165d060e1fe [] [] }} ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.686 [INFO][4651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.714 [INFO][4680] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" HandleID="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Workload="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.714 [INFO][4680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" HandleID="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Workload="localhost-k8s-csi--node--driver--vvndz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vvndz", "timestamp":"2025-09-09 23:53:13.714376283 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.714 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.714 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.714 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.724 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.732 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.739 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.741 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.744 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.744 [INFO][4680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.746 [INFO][4680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347 Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.750 [INFO][4680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.759 [INFO][4680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.759 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" host="localhost" Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.759 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:53:13.782318 containerd[1536]: 2025-09-09 23:53:13.759 [INFO][4680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" HandleID="k8s-pod-network.1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Workload="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.761 [INFO][4651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vvndz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b851fe3b-9228-49ef-96c2-0523568005b4", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vvndz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165d060e1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.761 [INFO][4651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.762 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali165d060e1fe ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.764 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.767 [INFO][4651] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" 
Namespace="calico-system" Pod="csi-node-driver-vvndz" WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vvndz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b851fe3b-9228-49ef-96c2-0523568005b4", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347", Pod:"csi-node-driver-vvndz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali165d060e1fe", MAC:"7a:05:bc:ec:5b:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:13.782933 containerd[1536]: 2025-09-09 23:53:13.779 [INFO][4651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" Namespace="calico-system" Pod="csi-node-driver-vvndz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vvndz-eth0" Sep 9 23:53:13.816003 containerd[1536]: time="2025-09-09T23:53:13.815944526Z" level=info msg="connecting to shim 1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347" address="unix:///run/containerd/s/745a9d8eb80d0dbdb8352a316eeb55577b564cea879044f79412c9c7272fcc07" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:13.852637 systemd[1]: Started cri-containerd-1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347.scope - libcontainer container 1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347. Sep 9 23:53:13.865471 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:13.878252 containerd[1536]: time="2025-09-09T23:53:13.878214590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvndz,Uid:b851fe3b-9228-49ef-96c2-0523568005b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347\"" Sep 9 23:53:13.880567 containerd[1536]: time="2025-09-09T23:53:13.880535918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 23:53:14.020792 systemd-networkd[1431]: cali36a2654838c: Gained IPv6LL Sep 9 23:53:14.046150 kubelet[2645]: I0909 23:53:14.046077 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dpjs2" podStartSLOduration=40.046057978 podStartE2EDuration="40.046057978s" podCreationTimestamp="2025-09-09 23:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 23:53:14.026729077 +0000 UTC m=+46.480013847" watchObservedRunningTime="2025-09-09 23:53:14.046057978 +0000 UTC m=+46.499342788" Sep 9 23:53:14.064006 kubelet[2645]: I0909 23:53:14.063935 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-6658bf945-fm77m" podStartSLOduration=21.390120754 podStartE2EDuration="23.063914459s" podCreationTimestamp="2025-09-09 23:52:51 +0000 UTC" firstStartedPulling="2025-09-09 23:53:11.918928851 +0000 UTC m=+44.372213661" lastFinishedPulling="2025-09-09 23:53:13.592722556 +0000 UTC m=+46.046007366" observedRunningTime="2025-09-09 23:53:14.063231348 +0000 UTC m=+46.516516158" watchObservedRunningTime="2025-09-09 23:53:14.063914459 +0000 UTC m=+46.517199269" Sep 9 23:53:14.114748 containerd[1536]: time="2025-09-09T23:53:14.114604381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\" id:\"23515908f5ef15cbc624ed95c6b36ec0995df07c5d04e0ca10083ccea339bdb1\" pid:4765 exited_at:{seconds:1757461994 nanos:107996789}" Sep 9 23:53:14.638287 containerd[1536]: time="2025-09-09T23:53:14.637891816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-hdp5z,Uid:202f48c2-44e5-4a2c-a0ad-7f271e2da270,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:53:14.638287 containerd[1536]: time="2025-09-09T23:53:14.637891736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-9cghd,Uid:b696c188-d2c3-4fbf-94e1-ea923f86a366,Namespace:calico-apiserver,Attempt:0,}" Sep 9 23:53:14.640168 containerd[1536]: time="2025-09-09T23:53:14.639384236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4nw59,Uid:1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181,Namespace:calico-system,Attempt:0,}" Sep 9 23:53:14.838512 systemd-networkd[1431]: calib2860850688: Link UP Sep 9 23:53:14.840896 systemd-networkd[1431]: calib2860850688: Gained carrier Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.713 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0 
calico-apiserver-66d8b7fdbc- calico-apiserver b696c188-d2c3-4fbf-94e1-ea923f86a366 796 0 2025-09-09 23:52:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d8b7fdbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66d8b7fdbc-9cghd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2860850688 [] [] }} ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.714 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4836] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" HandleID="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4836] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" HandleID="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123530), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-66d8b7fdbc-9cghd", "timestamp":"2025-09-09 23:53:14.758173046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.759 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.781 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.788 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.796 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.799 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.804 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.805 [INFO][4836] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.808 [INFO][4836] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077 Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.815 [INFO][4836] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.824 [INFO][4836] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.824 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" host="localhost" Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.825 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:53:14.862138 containerd[1536]: 2025-09-09 23:53:14.827 [INFO][4836] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" HandleID="k8s-pod-network.a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.833 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0", GenerateName:"calico-apiserver-66d8b7fdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b696c188-d2c3-4fbf-94e1-ea923f86a366", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8b7fdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66d8b7fdbc-9cghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2860850688", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.833 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.833 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2860850688 ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.844 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.844 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0", 
GenerateName:"calico-apiserver-66d8b7fdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"b696c188-d2c3-4fbf-94e1-ea923f86a366", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8b7fdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077", Pod:"calico-apiserver-66d8b7fdbc-9cghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2860850688", MAC:"a6:38:10:6b:1e:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:14.863143 containerd[1536]: 2025-09-09 23:53:14.856 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-9cghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--9cghd-eth0" Sep 9 23:53:14.885101 containerd[1536]: time="2025-09-09T23:53:14.885050267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:14.885521 containerd[1536]: 
time="2025-09-09T23:53:14.885458022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 23:53:14.887754 containerd[1536]: time="2025-09-09T23:53:14.887714432Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:14.890901 containerd[1536]: time="2025-09-09T23:53:14.890801870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:14.891485 containerd[1536]: time="2025-09-09T23:53:14.891451742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.010858345s" Sep 9 23:53:14.891535 containerd[1536]: time="2025-09-09T23:53:14.891491021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 23:53:14.900421 containerd[1536]: time="2025-09-09T23:53:14.900374902Z" level=info msg="CreateContainer within sandbox \"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 23:53:14.914377 containerd[1536]: time="2025-09-09T23:53:14.914282676Z" level=info msg="connecting to shim a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077" address="unix:///run/containerd/s/db37bfabe3ce68d1d695d74ac552d9f0dac018358d9391b8a89c904296977672" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:14.915726 containerd[1536]: 
time="2025-09-09T23:53:14.915635698Z" level=info msg="Container d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:14.932971 containerd[1536]: time="2025-09-09T23:53:14.932927107Z" level=info msg="CreateContainer within sandbox \"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273\"" Sep 9 23:53:14.933952 containerd[1536]: time="2025-09-09T23:53:14.933827495Z" level=info msg="StartContainer for \"d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273\"" Sep 9 23:53:14.936775 systemd-networkd[1431]: caliaaf74ffc53e: Link UP Sep 9 23:53:14.937879 systemd-networkd[1431]: caliaaf74ffc53e: Gained carrier Sep 9 23:53:14.940131 containerd[1536]: time="2025-09-09T23:53:14.940083371Z" level=info msg="connecting to shim d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273" address="unix:///run/containerd/s/745a9d8eb80d0dbdb8352a316eeb55577b564cea879044f79412c9c7272fcc07" protocol=ttrpc version=3 Sep 9 23:53:14.957692 systemd[1]: Started cri-containerd-a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077.scope - libcontainer container a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077. 
Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.711 [INFO][4786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0 calico-apiserver-66d8b7fdbc- calico-apiserver 202f48c2-44e5-4a2c-a0ad-7f271e2da270 791 0 2025-09-09 23:52:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66d8b7fdbc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66d8b7fdbc-hdp5z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaaf74ffc53e [] [] }} ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.712 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" HandleID="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" 
HandleID="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000430f90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66d8b7fdbc-hdp5z", "timestamp":"2025-09-09 23:53:14.758030528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.758 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.825 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.825 [INFO][4827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.881 [INFO][4827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.889 [INFO][4827] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.898 [INFO][4827] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.901 [INFO][4827] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.904 [INFO][4827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 
23:53:14.904 [INFO][4827] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.907 [INFO][4827] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.913 [INFO][4827] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4827] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" host="localhost" Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:53:14.959587 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" HandleID="k8s-pod-network.b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Workload="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.932 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0", GenerateName:"calico-apiserver-66d8b7fdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"202f48c2-44e5-4a2c-a0ad-7f271e2da270", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8b7fdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66d8b7fdbc-hdp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaaf74ffc53e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.932 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.932 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaaf74ffc53e ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.936 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.937 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0", 
GenerateName:"calico-apiserver-66d8b7fdbc-", Namespace:"calico-apiserver", SelfLink:"", UID:"202f48c2-44e5-4a2c-a0ad-7f271e2da270", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66d8b7fdbc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be", Pod:"calico-apiserver-66d8b7fdbc-hdp5z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaaf74ffc53e", MAC:"be:82:00:4f:64:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:14.960098 containerd[1536]: 2025-09-09 23:53:14.952 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" Namespace="calico-apiserver" Pod="calico-apiserver-66d8b7fdbc-hdp5z" WorkloadEndpoint="localhost-k8s-calico--apiserver--66d8b7fdbc--hdp5z-eth0" Sep 9 23:53:14.980762 systemd[1]: Started cri-containerd-d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273.scope - libcontainer container d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273. 
Sep 9 23:53:14.997803 containerd[1536]: time="2025-09-09T23:53:14.997755319Z" level=info msg="connecting to shim b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be" address="unix:///run/containerd/s/a9b59e1e29c80cedab128ede648e16bcfeabc9ee4b7d49bb0e1948e9ae021735" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:15.004339 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:36788.service - OpenSSH per-connection server daemon (10.0.0.1:36788). Sep 9 23:53:15.007890 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:15.046844 systemd[1]: Started cri-containerd-b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be.scope - libcontainer container b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be. Sep 9 23:53:15.052055 systemd-networkd[1431]: calidef293dcb8b: Link UP Sep 9 23:53:15.052866 systemd-networkd[1431]: calidef293dcb8b: Gained carrier Sep 9 23:53:15.068479 containerd[1536]: time="2025-09-09T23:53:15.068403997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-9cghd,Uid:b696c188-d2c3-4fbf-94e1-ea923f86a366,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077\"" Sep 9 23:53:15.070784 containerd[1536]: time="2025-09-09T23:53:15.070477570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.736 [INFO][4810] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--4nw59-eth0 goldmane-7988f88666- calico-system 1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181 793 0 2025-09-09 23:52:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-4nw59 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidef293dcb8b [] [] }} ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.736 [INFO][4810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.794 [INFO][4847] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" HandleID="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Workload="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.795 [INFO][4847] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" HandleID="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Workload="localhost-k8s-goldmane--7988f88666--4nw59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051bdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-4nw59", "timestamp":"2025-09-09 23:53:14.794944114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.795 [INFO][4847] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.922 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.981 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:14.990 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.003 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.009 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.019 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.019 [INFO][4847] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.023 [INFO][4847] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696 Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.027 [INFO][4847] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" host="localhost" Sep 9 23:53:15.083181 
containerd[1536]: 2025-09-09 23:53:15.037 [INFO][4847] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.037 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" host="localhost" Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.037 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 23:53:15.083181 containerd[1536]: 2025-09-09 23:53:15.037 [INFO][4847] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" HandleID="k8s-pod-network.daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Workload="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.047 [INFO][4810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4nw59-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-4nw59", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidef293dcb8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.048 [INFO][4810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.048 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidef293dcb8b ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.052 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.053 [INFO][4810] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--4nw59-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696", Pod:"goldmane-7988f88666-4nw59", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidef293dcb8b", MAC:"52:72:e2:25:18:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:53:15.083762 containerd[1536]: 2025-09-09 23:53:15.071 [INFO][4810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" Namespace="calico-system" Pod="goldmane-7988f88666-4nw59" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--4nw59-eth0" Sep 9 23:53:15.101864 sshd[4960]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:15.102853 containerd[1536]: time="2025-09-09T23:53:15.102816748Z" level=info msg="StartContainer for \"d3809f7f3551c3ea0204b7bd9e65a142fcbe9b625d76aca77bbdd2829c672273\" returns successfully" Sep 9 23:53:15.103886 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:15.107953 systemd-networkd[1431]: cali165d060e1fe: Gained IPv6LL Sep 9 23:53:15.109289 systemd-logind[1506]: New session 9 of user core. Sep 9 23:53:15.116213 containerd[1536]: time="2025-09-09T23:53:15.116142855Z" level=info msg="connecting to shim daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696" address="unix:///run/containerd/s/52dadd61bc141b5da1bc2794c9dc3c5d9c149f128a2df59ee3b83e7b2b097c5d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 23:53:15.117198 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 23:53:15.121392 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:15.147581 containerd[1536]: time="2025-09-09T23:53:15.147484926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66d8b7fdbc-hdp5z,Uid:202f48c2-44e5-4a2c-a0ad-7f271e2da270,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be\"" Sep 9 23:53:15.148608 systemd[1]: Started cri-containerd-daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696.scope - libcontainer container daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696. 
Sep 9 23:53:15.167259 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:53:15.197392 containerd[1536]: time="2025-09-09T23:53:15.197351476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-4nw59,Uid:1e8c6f41-6bf9-44ba-ae7b-40afcbd8e181,Namespace:calico-system,Attempt:0,} returns sandbox id \"daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696\"" Sep 9 23:53:15.345608 sshd[5025]: Connection closed by 10.0.0.1 port 36788 Sep 9 23:53:15.345923 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:15.350371 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:36788.service: Deactivated successfully. Sep 9 23:53:15.352356 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 23:53:15.353219 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. Sep 9 23:53:15.354320 systemd-logind[1506]: Removed session 9. Sep 9 23:53:16.261127 systemd-networkd[1431]: calidef293dcb8b: Gained IPv6LL Sep 9 23:53:16.834992 systemd-networkd[1431]: calib2860850688: Gained IPv6LL Sep 9 23:53:16.962620 systemd-networkd[1431]: caliaaf74ffc53e: Gained IPv6LL Sep 9 23:53:17.092190 containerd[1536]: time="2025-09-09T23:53:17.092061948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:17.092908 containerd[1536]: time="2025-09-09T23:53:17.092783379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 23:53:17.094032 containerd[1536]: time="2025-09-09T23:53:17.093992284Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:17.096048 containerd[1536]: time="2025-09-09T23:53:17.096007779Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:17.096855 containerd[1536]: time="2025-09-09T23:53:17.096817169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.0263022s" Sep 9 23:53:17.096912 containerd[1536]: time="2025-09-09T23:53:17.096855368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:53:17.098461 containerd[1536]: time="2025-09-09T23:53:17.098109913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 23:53:17.100999 containerd[1536]: time="2025-09-09T23:53:17.100656841Z" level=info msg="CreateContainer within sandbox \"a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:53:17.106110 containerd[1536]: time="2025-09-09T23:53:17.106073934Z" level=info msg="Container 37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:17.115880 containerd[1536]: time="2025-09-09T23:53:17.115839174Z" level=info msg="CreateContainer within sandbox \"a2a46c980da435bfbb7737330a3a685490178a6a2b1213f79d43fc353dcf1077\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413\"" Sep 9 23:53:17.116374 containerd[1536]: time="2025-09-09T23:53:17.116315008Z" level=info msg="StartContainer for 
\"37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413\"" Sep 9 23:53:17.117369 containerd[1536]: time="2025-09-09T23:53:17.117344515Z" level=info msg="connecting to shim 37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413" address="unix:///run/containerd/s/db37bfabe3ce68d1d695d74ac552d9f0dac018358d9391b8a89c904296977672" protocol=ttrpc version=3 Sep 9 23:53:17.143667 systemd[1]: Started cri-containerd-37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413.scope - libcontainer container 37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413. Sep 9 23:53:17.183897 containerd[1536]: time="2025-09-09T23:53:17.183820614Z" level=info msg="StartContainer for \"37ea5dcd22674ecafdab404f31b3019c6720c2036e9a5e0ce34f5ccbf9209413\" returns successfully" Sep 9 23:53:18.328233 containerd[1536]: time="2025-09-09T23:53:18.328165057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:18.329466 containerd[1536]: time="2025-09-09T23:53:18.329421802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 23:53:18.330484 containerd[1536]: time="2025-09-09T23:53:18.330456789Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:18.333341 containerd[1536]: time="2025-09-09T23:53:18.333297955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:18.334035 containerd[1536]: time="2025-09-09T23:53:18.333990827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id 
\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.235843875s" Sep 9 23:53:18.334035 containerd[1536]: time="2025-09-09T23:53:18.334032666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 23:53:18.335770 containerd[1536]: time="2025-09-09T23:53:18.335703166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 23:53:18.337928 containerd[1536]: time="2025-09-09T23:53:18.337870060Z" level=info msg="CreateContainer within sandbox \"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 23:53:18.348270 containerd[1536]: time="2025-09-09T23:53:18.348224216Z" level=info msg="Container a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:18.359809 containerd[1536]: time="2025-09-09T23:53:18.359005886Z" level=info msg="CreateContainer within sandbox \"1c39635cf7b369c9bd1128b05f6fa4005cbf0870c34663a37050cdc2cd38c347\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb\"" Sep 9 23:53:18.361973 containerd[1536]: time="2025-09-09T23:53:18.361932491Z" level=info msg="StartContainer for \"a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb\"" Sep 9 23:53:18.364150 containerd[1536]: time="2025-09-09T23:53:18.363981706Z" level=info msg="connecting to shim a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb" 
address="unix:///run/containerd/s/745a9d8eb80d0dbdb8352a316eeb55577b564cea879044f79412c9c7272fcc07" protocol=ttrpc version=3 Sep 9 23:53:18.388669 systemd[1]: Started cri-containerd-a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb.scope - libcontainer container a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb. Sep 9 23:53:18.430387 containerd[1536]: time="2025-09-09T23:53:18.430339667Z" level=info msg="StartContainer for \"a368ac19124168d1f1bf83fef4f0224f6ce4f7db0e64085cd54b6b7b8780a2fb\" returns successfully" Sep 9 23:53:18.747930 kubelet[2645]: I0909 23:53:18.747865 2645 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 23:53:18.747930 kubelet[2645]: I0909 23:53:18.747925 2645 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 23:53:18.756781 containerd[1536]: time="2025-09-09T23:53:18.756341863Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:18.757329 containerd[1536]: time="2025-09-09T23:53:18.757280932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 23:53:18.762203 containerd[1536]: time="2025-09-09T23:53:18.762153313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 426.417467ms" Sep 9 23:53:18.762203 containerd[1536]: time="2025-09-09T23:53:18.762198313Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 23:53:18.764915 containerd[1536]: time="2025-09-09T23:53:18.764870161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 23:53:18.767343 containerd[1536]: time="2025-09-09T23:53:18.767293771Z" level=info msg="CreateContainer within sandbox \"b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 23:53:18.779580 containerd[1536]: time="2025-09-09T23:53:18.779528864Z" level=info msg="Container cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:18.796080 containerd[1536]: time="2025-09-09T23:53:18.796019906Z" level=info msg="CreateContainer within sandbox \"b3981f3f69684f07d2fac941c902e29d1e51d7ea5a039c9410d12ceddedaa8be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9\"" Sep 9 23:53:18.796600 containerd[1536]: time="2025-09-09T23:53:18.796557459Z" level=info msg="StartContainer for \"cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9\"" Sep 9 23:53:18.798083 containerd[1536]: time="2025-09-09T23:53:18.798037601Z" level=info msg="connecting to shim cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9" address="unix:///run/containerd/s/a9b59e1e29c80cedab128ede648e16bcfeabc9ee4b7d49bb0e1948e9ae021735" protocol=ttrpc version=3 Sep 9 23:53:18.826706 systemd[1]: Started cri-containerd-cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9.scope - libcontainer container cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9. 
Sep 9 23:53:18.895051 containerd[1536]: time="2025-09-09T23:53:18.894949875Z" level=info msg="StartContainer for \"cce9bfa8b432edaaf195ebed3d9c93fd73b3bd8a8be034cbb9e567862d2378b9\" returns successfully" Sep 9 23:53:19.045221 kubelet[2645]: I0909 23:53:19.044870 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:53:19.056609 kubelet[2645]: I0909 23:53:19.056457 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vvndz" podStartSLOduration=23.600834148 podStartE2EDuration="28.056407829s" podCreationTimestamp="2025-09-09 23:52:51 +0000 UTC" firstStartedPulling="2025-09-09 23:53:13.879391414 +0000 UTC m=+46.332676224" lastFinishedPulling="2025-09-09 23:53:18.334965095 +0000 UTC m=+50.788249905" observedRunningTime="2025-09-09 23:53:19.055630558 +0000 UTC m=+51.508915368" watchObservedRunningTime="2025-09-09 23:53:19.056407829 +0000 UTC m=+51.509692639" Sep 9 23:53:19.057604 kubelet[2645]: I0909 23:53:19.057203 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-9cghd" podStartSLOduration=32.029747393 podStartE2EDuration="34.057193019s" podCreationTimestamp="2025-09-09 23:52:45 +0000 UTC" firstStartedPulling="2025-09-09 23:53:15.070273852 +0000 UTC m=+47.523558622" lastFinishedPulling="2025-09-09 23:53:17.097719478 +0000 UTC m=+49.551004248" observedRunningTime="2025-09-09 23:53:18.072657332 +0000 UTC m=+50.525942142" watchObservedRunningTime="2025-09-09 23:53:19.057193019 +0000 UTC m=+51.510477829" Sep 9 23:53:19.068058 kubelet[2645]: I0909 23:53:19.067991 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66d8b7fdbc-hdp5z" podStartSLOduration=30.454165932 podStartE2EDuration="34.067974413s" podCreationTimestamp="2025-09-09 23:52:45 +0000 UTC" firstStartedPulling="2025-09-09 23:53:15.150896242 +0000 UTC m=+47.604181012" 
lastFinishedPulling="2025-09-09 23:53:18.764704683 +0000 UTC m=+51.217989493" observedRunningTime="2025-09-09 23:53:19.067773735 +0000 UTC m=+51.521058545" watchObservedRunningTime="2025-09-09 23:53:19.067974413 +0000 UTC m=+51.521259183" Sep 9 23:53:20.048456 kubelet[2645]: I0909 23:53:20.048160 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:53:20.361904 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:52068.service - OpenSSH per-connection server daemon (10.0.0.1:52068). Sep 9 23:53:20.454088 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 52068 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:20.455845 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:20.464851 systemd-logind[1506]: New session 10 of user core. Sep 9 23:53:20.473705 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 23:53:20.732415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4673639.mount: Deactivated successfully. Sep 9 23:53:20.744452 sshd[5215]: Connection closed by 10.0.0.1 port 52068 Sep 9 23:53:20.744997 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:20.753949 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:52068.service: Deactivated successfully. Sep 9 23:53:20.756159 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 23:53:20.757223 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. Sep 9 23:53:20.760947 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:52084.service - OpenSSH per-connection server daemon (10.0.0.1:52084). Sep 9 23:53:20.762098 systemd-logind[1506]: Removed session 10. 
Sep 9 23:53:20.833448 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 52084 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:20.834952 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:20.840442 systemd-logind[1506]: New session 11 of user core. Sep 9 23:53:20.845637 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 23:53:21.114841 sshd[5238]: Connection closed by 10.0.0.1 port 52084 Sep 9 23:53:21.114422 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:21.123511 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:52084.service: Deactivated successfully. Sep 9 23:53:21.125803 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 23:53:21.129244 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. Sep 9 23:53:21.131919 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:52096.service - OpenSSH per-connection server daemon (10.0.0.1:52096). Sep 9 23:53:21.135369 systemd-logind[1506]: Removed session 11. Sep 9 23:53:21.192714 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 52096 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:21.194354 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:21.199458 systemd-logind[1506]: New session 12 of user core. Sep 9 23:53:21.209872 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 9 23:53:21.346002 containerd[1536]: time="2025-09-09T23:53:21.345943610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:21.348198 containerd[1536]: time="2025-09-09T23:53:21.348154786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 23:53:21.350216 containerd[1536]: time="2025-09-09T23:53:21.350015085Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:21.356457 containerd[1536]: time="2025-09-09T23:53:21.356381334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:53:21.360731 containerd[1536]: time="2025-09-09T23:53:21.360677686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.595766247s" Sep 9 23:53:21.361546 containerd[1536]: time="2025-09-09T23:53:21.361512597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 23:53:21.365757 containerd[1536]: time="2025-09-09T23:53:21.365659991Z" level=info msg="CreateContainer within sandbox \"daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 23:53:21.376143 containerd[1536]: time="2025-09-09T23:53:21.373687142Z" level=info 
msg="Container da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc: CDI devices from CRI Config.CDIDevices: []" Sep 9 23:53:21.383114 containerd[1536]: time="2025-09-09T23:53:21.383064077Z" level=info msg="CreateContainer within sandbox \"daaffbaeadd491f5dc65451598b971f24357d4c05a7cf1877a202b5dd2504696\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\"" Sep 9 23:53:21.383855 containerd[1536]: time="2025-09-09T23:53:21.383673790Z" level=info msg="StartContainer for \"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\"" Sep 9 23:53:21.386011 containerd[1536]: time="2025-09-09T23:53:21.385968085Z" level=info msg="connecting to shim da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc" address="unix:///run/containerd/s/52dadd61bc141b5da1bc2794c9dc3c5d9c149f128a2df59ee3b83e7b2b097c5d" protocol=ttrpc version=3 Sep 9 23:53:21.411674 systemd[1]: Started cri-containerd-da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc.scope - libcontainer container da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc. Sep 9 23:53:21.465270 sshd[5252]: Connection closed by 10.0.0.1 port 52096 Sep 9 23:53:21.471768 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:21.475541 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:52096.service: Deactivated successfully. Sep 9 23:53:21.479163 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 23:53:21.480031 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. Sep 9 23:53:21.481523 systemd-logind[1506]: Removed session 12. 
Sep 9 23:53:21.487368 containerd[1536]: time="2025-09-09T23:53:21.487297877Z" level=info msg="StartContainer for \"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" returns successfully" Sep 9 23:53:22.090689 kubelet[2645]: I0909 23:53:22.090618 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-4nw59" podStartSLOduration=25.926293706 podStartE2EDuration="32.090600627s" podCreationTimestamp="2025-09-09 23:52:50 +0000 UTC" firstStartedPulling="2025-09-09 23:53:15.198710779 +0000 UTC m=+47.651995589" lastFinishedPulling="2025-09-09 23:53:21.3630177 +0000 UTC m=+53.816302510" observedRunningTime="2025-09-09 23:53:22.090036793 +0000 UTC m=+54.543321603" watchObservedRunningTime="2025-09-09 23:53:22.090600627 +0000 UTC m=+54.543885437" Sep 9 23:53:22.163379 containerd[1536]: time="2025-09-09T23:53:22.163334158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" id:\"6fd519f9bac712b97bb299b6555e6357c63b61427e4e0a8a73f355e1be6dee2b\" pid:5319 exit_status:1 exited_at:{seconds:1757462002 nanos:163002481}" Sep 9 23:53:23.163480 containerd[1536]: time="2025-09-09T23:53:23.163376314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" id:\"6586506bc45a032b4053197a166422c449d56ab9456145c0ab4203e54958c143\" pid:5344 exit_status:1 exited_at:{seconds:1757462003 nanos:163071357}" Sep 9 23:53:23.735168 kubelet[2645]: I0909 23:53:23.733951 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:53:24.142476 kubelet[2645]: I0909 23:53:24.141854 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:53:26.486255 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:52102.service - OpenSSH per-connection server daemon (10.0.0.1:52102). 
Sep 9 23:53:26.539784 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 52102 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:26.541129 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:26.546372 systemd-logind[1506]: New session 13 of user core. Sep 9 23:53:26.559659 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 23:53:26.721735 sshd[5364]: Connection closed by 10.0.0.1 port 52102 Sep 9 23:53:26.722048 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:26.726171 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:52102.service: Deactivated successfully. Sep 9 23:53:26.729121 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 23:53:26.731210 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. Sep 9 23:53:26.733682 systemd-logind[1506]: Removed session 13. Sep 9 23:53:28.681478 containerd[1536]: time="2025-09-09T23:53:28.681316674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\" id:\"d5799dc16812fadcdbfc3b0c65545636045c3700293bf1fddb2879f3fd8ad613\" pid:5393 exited_at:{seconds:1757462008 nanos:681035796}" Sep 9 23:53:29.007498 containerd[1536]: time="2025-09-09T23:53:29.007335275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\" id:\"6d5230ad644372ad5d40878b38fea9da885b15b37f2c8faa9a9863b6bd2111a8\" pid:5436 exited_at:{seconds:1757462009 nanos:5483372}" Sep 9 23:53:29.019374 containerd[1536]: time="2025-09-09T23:53:29.019318406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" id:\"309705bf7c9312dca89b9459be490c4d9bcd95cfbf8686b0bf5825bdd636dad0\" pid:5417 exit_status:1 exited_at:{seconds:1757462009 nanos:18070577}" Sep 9 23:53:31.736754 
systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:41482.service - OpenSSH per-connection server daemon (10.0.0.1:41482). Sep 9 23:53:31.799662 sshd[5458]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:31.801105 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:31.806120 systemd-logind[1506]: New session 14 of user core. Sep 9 23:53:31.811632 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 23:53:31.970317 sshd[5461]: Connection closed by 10.0.0.1 port 41482 Sep 9 23:53:31.970715 sshd-session[5458]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:31.974625 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:41482.service: Deactivated successfully. Sep 9 23:53:31.976724 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 23:53:31.977715 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. Sep 9 23:53:31.978948 systemd-logind[1506]: Removed session 14. Sep 9 23:53:36.993985 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:41488.service - OpenSSH per-connection server daemon (10.0.0.1:41488). Sep 9 23:53:37.064409 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 41488 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:37.066243 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:37.075746 systemd-logind[1506]: New session 15 of user core. Sep 9 23:53:37.085698 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 23:53:37.249124 sshd[5481]: Connection closed by 10.0.0.1 port 41488 Sep 9 23:53:37.250847 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:37.255302 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:41488.service: Deactivated successfully. Sep 9 23:53:37.257498 systemd[1]: session-15.scope: Deactivated successfully. 
Sep 9 23:53:37.258498 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit. Sep 9 23:53:37.260325 systemd-logind[1506]: Removed session 15. Sep 9 23:53:42.264121 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:48372.service - OpenSSH per-connection server daemon (10.0.0.1:48372). Sep 9 23:53:42.310315 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 48372 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:42.311635 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:42.317124 systemd-logind[1506]: New session 16 of user core. Sep 9 23:53:42.320814 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 23:53:42.449063 sshd[5498]: Connection closed by 10.0.0.1 port 48372 Sep 9 23:53:42.447935 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:42.459787 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:48372.service: Deactivated successfully. Sep 9 23:53:42.463681 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 23:53:42.464827 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit. Sep 9 23:53:42.469313 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:48378.service - OpenSSH per-connection server daemon (10.0.0.1:48378). Sep 9 23:53:42.470803 systemd-logind[1506]: Removed session 16. Sep 9 23:53:42.521560 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 48378 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:42.522781 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:42.527526 systemd-logind[1506]: New session 17 of user core. Sep 9 23:53:42.533601 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 9 23:53:42.735844 sshd[5515]: Connection closed by 10.0.0.1 port 48378 Sep 9 23:53:42.736549 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:42.746562 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:48378.service: Deactivated successfully. Sep 9 23:53:42.748129 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 23:53:42.749277 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit. Sep 9 23:53:42.751074 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:48386.service - OpenSSH per-connection server daemon (10.0.0.1:48386). Sep 9 23:53:42.753519 systemd-logind[1506]: Removed session 17. Sep 9 23:53:42.808605 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 48386 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:42.810154 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:42.814681 systemd-logind[1506]: New session 18 of user core. Sep 9 23:53:42.827651 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 23:53:44.414814 sshd[5530]: Connection closed by 10.0.0.1 port 48386 Sep 9 23:53:44.415407 sshd-session[5527]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:44.428139 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:48386.service: Deactivated successfully. Sep 9 23:53:44.433110 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 23:53:44.433355 systemd[1]: session-18.scope: Consumed 529ms CPU time, 75.6M memory peak. Sep 9 23:53:44.435758 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit. Sep 9 23:53:44.441172 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:48390.service - OpenSSH per-connection server daemon (10.0.0.1:48390). Sep 9 23:53:44.444514 systemd-logind[1506]: Removed session 18. 
Sep 9 23:53:44.506976 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 48390 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:44.508202 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:44.512500 systemd-logind[1506]: New session 19 of user core. Sep 9 23:53:44.522621 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 23:53:44.819177 sshd[5553]: Connection closed by 10.0.0.1 port 48390 Sep 9 23:53:44.819815 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:44.829328 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:48390.service: Deactivated successfully. Sep 9 23:53:44.832053 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 23:53:44.834967 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit. Sep 9 23:53:44.841534 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:48406.service - OpenSSH per-connection server daemon (10.0.0.1:48406). Sep 9 23:53:44.843613 systemd-logind[1506]: Removed session 19. Sep 9 23:53:44.893987 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 48406 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o Sep 9 23:53:44.896297 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:53:44.900338 systemd-logind[1506]: New session 20 of user core. Sep 9 23:53:44.905606 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 23:53:45.084939 sshd[5568]: Connection closed by 10.0.0.1 port 48406 Sep 9 23:53:45.085307 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Sep 9 23:53:45.088825 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:48406.service: Deactivated successfully. Sep 9 23:53:45.090643 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 23:53:45.091310 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit. 
Sep 9 23:53:45.092683 systemd-logind[1506]: Removed session 20.
Sep 9 23:53:50.102271 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:56022.service - OpenSSH per-connection server daemon (10.0.0.1:56022).
Sep 9 23:53:50.158378 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 56022 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:53:50.159876 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:53:50.167868 systemd-logind[1506]: New session 21 of user core.
Sep 9 23:53:50.171648 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 23:53:50.304283 sshd[5591]: Connection closed by 10.0.0.1 port 56022
Sep 9 23:53:50.304370 sshd-session[5588]: pam_unix(sshd:session): session closed for user core
Sep 9 23:53:50.308262 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:56022.service: Deactivated successfully.
Sep 9 23:53:50.311395 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 23:53:50.312261 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit.
Sep 9 23:53:50.314316 systemd-logind[1506]: Removed session 21.
Sep 9 23:53:55.318099 systemd[1]: Started sshd@21-10.0.0.91:22-10.0.0.1:56038.service - OpenSSH per-connection server daemon (10.0.0.1:56038).
Sep 9 23:53:55.375464 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 56038 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:53:55.375984 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:53:55.384248 systemd-logind[1506]: New session 22 of user core.
Sep 9 23:53:55.393634 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 23:53:55.524671 sshd[5613]: Connection closed by 10.0.0.1 port 56038
Sep 9 23:53:55.524989 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Sep 9 23:53:55.529880 systemd[1]: sshd@21-10.0.0.91:22-10.0.0.1:56038.service: Deactivated successfully.
Sep 9 23:53:55.532071 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 23:53:55.533867 systemd-logind[1506]: Session 22 logged out. Waiting for processes to exit.
Sep 9 23:53:55.534880 systemd-logind[1506]: Removed session 22.
Sep 9 23:53:58.684348 containerd[1536]: time="2025-09-09T23:53:58.684239104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f526193a76d671df9b8f1d347a3d663c72acf35c0a44206465c56ef9f1333e4\" id:\"20856a3751b5e59b6e2cbb4e93942dd29fe3cea981004be651be263901c55b2c\" pid:5638 exited_at:{seconds:1757462038 nanos:683895746}"
Sep 9 23:53:58.945166 containerd[1536]: time="2025-09-09T23:53:58.945016714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66b87419558bce3031782dbb2181ad442ad6302146caae242f9b3fb4676a3df8\" id:\"300cd8cfcf9063a391fb30b7eb02c20d326e14a2007c161de66763212bc8b2dd\" pid:5673 exited_at:{seconds:1757462038 nanos:944653996}"
Sep 9 23:53:58.978274 containerd[1536]: time="2025-09-09T23:53:58.978235312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" id:\"9eae3e3fd2c34c3906df2e237e5d2feeb7d0d8a30998ec3354a2795356cb3921\" pid:5682 exited_at:{seconds:1757462038 nanos:977917994}"
Sep 9 23:53:59.403952 containerd[1536]: time="2025-09-09T23:53:59.403829875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da4ae2a1cc32b7a12d9c10087e3ebf2c3b1ed92866af3d1c196a6ef84760cabc\" id:\"6efde9e1ac1cb5c25cfbdee215da2ae78dcebb75f1aab662b82dce8090fdd86e\" pid:5713 exited_at:{seconds:1757462039 nanos:403547277}"
Sep 9 23:54:00.541744 systemd[1]: Started sshd@22-10.0.0.91:22-10.0.0.1:34180.service - OpenSSH per-connection server daemon (10.0.0.1:34180).
Sep 9 23:54:00.600854 sshd[5725]: Accepted publickey for core from 10.0.0.1 port 34180 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:54:00.602574 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:54:00.608500 systemd-logind[1506]: New session 23 of user core.
Sep 9 23:54:00.614620 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 23:54:00.753559 sshd[5728]: Connection closed by 10.0.0.1 port 34180
Sep 9 23:54:00.755271 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Sep 9 23:54:00.759318 systemd[1]: sshd@22-10.0.0.91:22-10.0.0.1:34180.service: Deactivated successfully.
Sep 9 23:54:00.769633 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 23:54:00.770821 systemd-logind[1506]: Session 23 logged out. Waiting for processes to exit.
Sep 9 23:54:00.774624 systemd-logind[1506]: Removed session 23.
Sep 9 23:54:05.767118 systemd[1]: Started sshd@23-10.0.0.91:22-10.0.0.1:34188.service - OpenSSH per-connection server daemon (10.0.0.1:34188).
Sep 9 23:54:05.813045 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 34188 ssh2: RSA SHA256:ShEbAFDiud3N347dMM7a5FvhCCVidjBtKvjtghHDp6o
Sep 9 23:54:05.814426 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 23:54:05.819027 systemd-logind[1506]: New session 24 of user core.
Sep 9 23:54:05.825633 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 23:54:05.986765 sshd[5748]: Connection closed by 10.0.0.1 port 34188
Sep 9 23:54:05.987726 sshd-session[5745]: pam_unix(sshd:session): session closed for user core
Sep 9 23:54:05.992426 systemd[1]: sshd@23-10.0.0.91:22-10.0.0.1:34188.service: Deactivated successfully.
Sep 9 23:54:05.994782 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 23:54:05.996226 systemd-logind[1506]: Session 24 logged out. Waiting for processes to exit.
Sep 9 23:54:05.998967 systemd-logind[1506]: Removed session 24.