Sep 10 23:43:47.766026 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:43:47.766046 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:43:47.766056 kernel: KASLR enabled
Sep 10 23:43:47.766062 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:43:47.766067 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 10 23:43:47.766073 kernel: random: crng init done
Sep 10 23:43:47.766080 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 10 23:43:47.766085 kernel: secureboot: Secure boot enabled
Sep 10 23:43:47.766091 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:43:47.766098 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 10 23:43:47.766104 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:43:47.766109 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766115 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766121 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766128 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766135 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766141 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766147 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766153 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766159 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:43:47.766165 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:43:47.766171 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:43:47.766177 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:43:47.766183 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 10 23:43:47.766189 kernel: Zone ranges:
Sep 10 23:43:47.766196 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:43:47.766202 kernel:   DMA32    empty
Sep 10 23:43:47.766208 kernel:   Normal   empty
Sep 10 23:43:47.766214 kernel:   Device   empty
Sep 10 23:43:47.766219 kernel: Movable zone start for each node
Sep 10 23:43:47.766225 kernel: Early memory node ranges
Sep 10 23:43:47.766231 kernel:   node   0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 10 23:43:47.766237 kernel:   node   0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 10 23:43:47.766243 kernel:   node   0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 10 23:43:47.766249 kernel:   node   0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 10 23:43:47.766255 kernel:   node   0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 10 23:43:47.766261 kernel:   node   0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 10 23:43:47.766268 kernel:   node   0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 10 23:43:47.766274 kernel:   node   0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 10 23:43:47.766280 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:43:47.766289 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:43:47.766295 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:43:47.766302 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 10 23:43:47.766308 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:43:47.766316 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:43:47.766322 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:43:47.766329 kernel: psci: Trusted OS migration not required
Sep 10 23:43:47.766335 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:43:47.766341 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:43:47.766347 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:43:47.766354 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:43:47.766360 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:43:47.766366 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:43:47.766374 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:43:47.766380 kernel: CPU features: detected: Spectre-v4
Sep 10 23:43:47.766386 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:43:47.766393 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:43:47.766399 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:43:47.766406 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:43:47.766412 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:43:47.766418 kernel: alternatives: applying boot alternatives
Sep 10 23:43:47.766426 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:43:47.766433 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:43:47.766439 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:43:47.766447 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:43:47.766454 kernel: Fallback order for Node 0: 0
Sep 10 23:43:47.766460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:43:47.766466 kernel: Policy zone: DMA
Sep 10 23:43:47.766473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:43:47.766479 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:43:47.766486 kernel: software IO TLB: area num 4.
Sep 10 23:43:47.766492 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:43:47.766499 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 10 23:43:47.766505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:43:47.766512 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:43:47.766519 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:43:47.766527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:43:47.766534 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:43:47.766540 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:43:47.766547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:43:47.766556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:43:47.766563 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:43:47.766570 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:43:47.766576 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:43:47.766583 kernel: GICv3: 256 SPIs implemented
Sep 10 23:43:47.766589 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:43:47.766617 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:43:47.766627 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:43:47.766633 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:43:47.766640 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:43:47.766647 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:43:47.766653 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:43:47.766660 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:43:47.766667 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:43:47.766673 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:43:47.766680 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:43:47.766686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:43:47.766696 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:43:47.766702 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:43:47.766711 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:43:47.766720 kernel: arm-pv: using stolen time PV
Sep 10 23:43:47.766730 kernel: Console: colour dummy device 80x25
Sep 10 23:43:47.766737 kernel: ACPI: Core revision 20240827
Sep 10 23:43:47.766744 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:43:47.766750 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:43:47.766757 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:43:47.766764 kernel: landlock: Up and running.
Sep 10 23:43:47.766770 kernel: SELinux: Initializing.
Sep 10 23:43:47.766779 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:43:47.766787 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:43:47.766794 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:43:47.766801 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:43:47.766808 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:43:47.766816 kernel: Remapping and enabling EFI services.
Sep 10 23:43:47.766823 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:43:47.766830 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:43:47.766837 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:43:47.766846 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:43:47.766859 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:43:47.766868 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:43:47.766877 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:43:47.766884 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:43:47.766891 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:43:47.766898 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:43:47.766905 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:43:47.766912 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:43:47.766920 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:43:47.766940 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:43:47.766947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:43:47.766954 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:43:47.766961 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:43:47.766968 kernel: SMP: Total of 4 processors activated.
Sep 10 23:43:47.766975 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:43:47.766982 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:43:47.766989 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:43:47.766997 kernel: CPU features: detected: Common not Private translations
Sep 10 23:43:47.767004 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:43:47.767011 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:43:47.767018 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:43:47.767025 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:43:47.767031 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:43:47.767038 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:43:47.767045 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:43:47.767052 kernel: alternatives: applying system-wide alternatives
Sep 10 23:43:47.767060 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:43:47.767068 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 10 23:43:47.767075 kernel: devtmpfs: initialized
Sep 10 23:43:47.767082 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:43:47.767089 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:43:47.767096 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:43:47.767102 kernel: 0 pages in range for non-PLT usage
Sep 10 23:43:47.767109 kernel: 508560 pages in range for PLT usage
Sep 10 23:43:47.767116 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:43:47.767124 kernel: SMBIOS 3.0.0 present.
Sep 10 23:43:47.767131 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:43:47.767138 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:43:47.767145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:43:47.767152 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:43:47.767159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:43:47.767166 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:43:47.767172 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:43:47.767179 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 10 23:43:47.767188 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:43:47.767194 kernel: cpuidle: using governor menu
Sep 10 23:43:47.767201 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:43:47.767208 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:43:47.767215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:43:47.767222 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:43:47.767229 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:43:47.767236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:43:47.767243 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:43:47.767251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:43:47.767257 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:43:47.767264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:43:47.767271 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:43:47.767278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:43:47.767285 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:43:47.767291 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:43:47.767298 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:43:47.767305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:43:47.767313 kernel: ACPI: Interpreter enabled
Sep 10 23:43:47.767320 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:43:47.767327 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:43:47.767334 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:43:47.767340 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:43:47.767347 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:43:47.767354 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:43:47.767361 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:43:47.767368 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:43:47.767376 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:43:47.767501 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:43:47.767566 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:43:47.767689 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:43:47.767754 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:43:47.767813 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:43:47.767822 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:43:47.767832 kernel: PCI host bridge to bus 0000:00
Sep 10 23:43:47.767897 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:43:47.767951 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:43:47.768003 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:43:47.768054 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:43:47.768132 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:43:47.768203 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:43:47.768287 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:43:47.768355 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:43:47.768414 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:43:47.768472 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:43:47.768529 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:43:47.768588 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:43:47.768662 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:43:47.768719 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:43:47.768771 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:43:47.768780 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:43:47.768787 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:43:47.768794 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:43:47.768800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:43:47.768807 kernel: iommu: Default domain type: Translated
Sep 10 23:43:47.768814 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:43:47.768822 kernel: efivars: Registered efivars operations
Sep 10 23:43:47.768829 kernel: vgaarb: loaded
Sep 10 23:43:47.768836 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:43:47.768843 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:43:47.768850 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:43:47.768856 kernel: pnp: PnP ACPI init
Sep 10 23:43:47.768920 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:43:47.768930 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:43:47.768939 kernel: NET: Registered PF_INET protocol family
Sep 10 23:43:47.768946 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:43:47.768953 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:43:47.768959 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:43:47.768966 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:43:47.768973 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:43:47.768980 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:43:47.768987 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:43:47.768994 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:43:47.769003 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:43:47.769009 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:43:47.769016 kernel: kvm [1]: HYP mode not available
Sep 10 23:43:47.769023 kernel: Initialise system trusted keyrings
Sep 10 23:43:47.769030 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:43:47.769037 kernel: Key type asymmetric registered
Sep 10 23:43:47.769044 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:43:47.769050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:43:47.769057 kernel: io scheduler mq-deadline registered
Sep 10 23:43:47.769065 kernel: io scheduler kyber registered
Sep 10 23:43:47.769072 kernel: io scheduler bfq registered
Sep 10 23:43:47.769079 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:43:47.769086 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:43:47.769093 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:43:47.769153 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:43:47.769162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:43:47.769169 kernel: thunder_xcv, ver 1.0
Sep 10 23:43:47.769176 kernel: thunder_bgx, ver 1.0
Sep 10 23:43:47.769185 kernel: nicpf, ver 1.0
Sep 10 23:43:47.769191 kernel: nicvf, ver 1.0
Sep 10 23:43:47.769259 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:43:47.769315 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:43:47 UTC (1757547827)
Sep 10 23:43:47.769324 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:43:47.769332 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:43:47.769339 kernel: watchdog: NMI not fully supported
Sep 10 23:43:47.769345 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:43:47.769354 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:43:47.769361 kernel: Segment Routing with IPv6
Sep 10 23:43:47.769368 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:43:47.769375 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:43:47.769382 kernel: Key type dns_resolver registered
Sep 10 23:43:47.769388 kernel: registered taskstats version 1
Sep 10 23:43:47.769395 kernel: Loading compiled-in X.509 certificates
Sep 10 23:43:47.769402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:43:47.769409 kernel: Demotion targets for Node 0: null
Sep 10 23:43:47.769417 kernel: Key type .fscrypt registered
Sep 10 23:43:47.769424 kernel: Key type fscrypt-provisioning registered
Sep 10 23:43:47.769431 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:43:47.769438 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:43:47.769444 kernel: ima: No architecture policies found
Sep 10 23:43:47.769451 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:43:47.769458 kernel: clk: Disabling unused clocks
Sep 10 23:43:47.769465 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:43:47.769472 kernel: Warning: unable to open an initial console.
Sep 10 23:43:47.769480 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:43:47.769487 kernel: Run /init as init process
Sep 10 23:43:47.769493 kernel:   with arguments:
Sep 10 23:43:47.769500 kernel:     /init
Sep 10 23:43:47.769506 kernel:   with environment:
Sep 10 23:43:47.769513 kernel:     HOME=/
Sep 10 23:43:47.769520 kernel:     TERM=linux
Sep 10 23:43:47.769526 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:43:47.769534 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:43:47.769545 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:43:47.769553 systemd[1]: Detected virtualization kvm.
Sep 10 23:43:47.769560 systemd[1]: Detected architecture arm64.
Sep 10 23:43:47.769568 systemd[1]: Running in initrd.
Sep 10 23:43:47.769575 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:43:47.769582 systemd[1]: Hostname set to <localhost>.
Sep 10 23:43:47.769589 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:43:47.769613 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:43:47.769622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:43:47.769629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:43:47.769637 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:43:47.769648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:43:47.769656 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:43:47.769664 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:43:47.769674 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:43:47.769682 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:43:47.769689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:43:47.769697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:43:47.769704 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:43:47.769712 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:43:47.769719 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:43:47.769726 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:43:47.769735 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:43:47.769742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:43:47.769750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:43:47.769757 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:43:47.769765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:43:47.769772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:43:47.769779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:43:47.769787 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:43:47.769794 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:43:47.769803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:43:47.769810 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:43:47.769818 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:43:47.769825 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:43:47.769833 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:43:47.769840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:43:47.769848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:43:47.769855 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:43:47.769864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:43:47.769871 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:43:47.769879 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:43:47.769887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:43:47.769909 systemd-journald[245]: Collecting audit messages is disabled.
Sep 10 23:43:47.769928 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:43:47.769936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:43:47.769944 systemd-journald[245]: Journal started
Sep 10 23:43:47.769964 systemd-journald[245]: Runtime Journal (/run/log/journal/4a587fbf951e472686b2ad8e5ea7cbd3) is 6M, max 48.5M, 42.4M free.
Sep 10 23:43:47.752106 systemd-modules-load[246]: Inserted module 'overlay'
Sep 10 23:43:47.772652 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:43:47.772672 kernel: Bridge firewalling registered
Sep 10 23:43:47.773000 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 10 23:43:47.777793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:43:47.778762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:43:47.783239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:43:47.784537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:43:47.788261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:43:47.796628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:43:47.797660 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:43:47.797993 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:43:47.800315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:43:47.802322 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:43:47.805046 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:43:47.806972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:43:47.830110 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:43:47.843902 systemd-resolved[290]: Positive Trust Anchors:
Sep 10 23:43:47.843917 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:43:47.843950 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:43:47.848700 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 10 23:43:47.849656 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:43:47.852757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:43:47.921630 kernel: SCSI subsystem initialized
Sep 10 23:43:47.925617 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:43:47.933641 kernel: iscsi: registered transport (tcp)
Sep 10 23:43:47.945883 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:43:47.945917 kernel: QLogic iSCSI HBA Driver
Sep 10 23:43:47.963512 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:43:47.979658 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:43:47.982214 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:43:48.032570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:43:48.034799 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:43:48.102632 kernel: raid6: neonx8 gen() 15621 MB/s
Sep 10 23:43:48.119622 kernel: raid6: neonx4 gen() 15782 MB/s
Sep 10 23:43:48.136627 kernel: raid6: neonx2 gen() 13215 MB/s
Sep 10 23:43:48.153619 kernel: raid6: neonx1 gen() 10438 MB/s
Sep 10 23:43:48.170616 kernel: raid6: int64x8 gen() 6895 MB/s
Sep 10 23:43:48.187618 kernel: raid6: int64x4 gen() 7344 MB/s
Sep 10 23:43:48.204617 kernel: raid6: int64x2 gen() 6096 MB/s
Sep 10 23:43:48.221620 kernel: raid6: int64x1 gen() 5049 MB/s
Sep 10 23:43:48.221634 kernel: raid6: using algorithm neonx4 gen() 15782 MB/s
Sep 10 23:43:48.238628 kernel: raid6: .... xor() 12325 MB/s, rmw enabled
Sep 10 23:43:48.238647 kernel: raid6: using neon recovery algorithm
Sep 10 23:43:48.244866 kernel: xor: measuring software checksum speed
Sep 10 23:43:48.244891 kernel: 8regs : 21601 MB/sec
Sep 10 23:43:48.244903 kernel: 32regs : 21704 MB/sec
Sep 10 23:43:48.245989 kernel: arm64_neon : 28089 MB/sec
Sep 10 23:43:48.246005 kernel: xor: using function: arm64_neon (28089 MB/sec)
Sep 10 23:43:48.298637 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:43:48.309103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:43:48.311753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:43:48.362743 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 10 23:43:48.368900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:43:48.371160 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:43:48.400928 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 10 23:43:48.428678 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:43:48.430984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:43:48.495644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:43:48.498762 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:43:48.574628 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:43:48.574844 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:43:48.577231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:43:48.577343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:43:48.583707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:43:48.585674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:43:48.590765 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:43:48.590807 kernel: GPT:9289727 != 19775487
Sep 10 23:43:48.590817 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:43:48.591684 kernel: GPT:9289727 != 19775487
Sep 10 23:43:48.591708 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:43:48.592624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:43:48.616492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:43:48.622973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:43:48.630933 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:43:48.633103 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:43:48.644509 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:43:48.645626 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:43:48.653864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:43:48.654836 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:43:48.656550 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:43:48.658323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:43:48.660676 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:43:48.662358 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:43:48.684026 disk-uuid[591]: Primary Header is updated.
Sep 10 23:43:48.684026 disk-uuid[591]: Secondary Entries is updated.
Sep 10 23:43:48.684026 disk-uuid[591]: Secondary Header is updated.
Sep 10 23:43:48.688447 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:43:48.690265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:43:49.697570 disk-uuid[594]: The operation has completed successfully.
Sep 10 23:43:49.698499 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:43:49.728853 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:43:49.728954 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:43:49.753261 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:43:49.774450 sh[612]: Success
Sep 10 23:43:49.786972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:43:49.787015 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:43:49.787857 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:43:49.795642 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:43:49.825079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:43:49.827076 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:43:49.841341 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:43:49.847628 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 10 23:43:49.849794 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade
Sep 10 23:43:49.849837 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:43:49.853709 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:43:49.853754 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:43:49.854748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:43:49.855875 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:43:49.856857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:43:49.857691 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:43:49.860457 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:43:49.894858 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Sep 10 23:43:49.894912 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:43:49.895644 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:43:49.897954 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:43:49.898005 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:43:49.902634 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:43:49.904659 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:43:49.906848 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:43:49.983638 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:43:49.986280 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:43:50.005431 ignition[702]: Ignition 2.21.0
Sep 10 23:43:50.005446 ignition[702]: Stage: fetch-offline
Sep 10 23:43:50.005478 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:50.005487 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:50.005675 ignition[702]: parsed url from cmdline: ""
Sep 10 23:43:50.005678 ignition[702]: no config URL provided
Sep 10 23:43:50.005683 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:43:50.005689 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:43:50.005710 ignition[702]: op(1): [started] loading QEMU firmware config module
Sep 10 23:43:50.005714 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:43:50.012804 ignition[702]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:43:50.025011 systemd-networkd[803]: lo: Link UP
Sep 10 23:43:50.025024 systemd-networkd[803]: lo: Gained carrier
Sep 10 23:43:50.025720 systemd-networkd[803]: Enumeration completed
Sep 10 23:43:50.025817 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:43:50.026744 systemd[1]: Reached target network.target - Network.
Sep 10 23:43:50.028147 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:43:50.028151 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:43:50.029117 systemd-networkd[803]: eth0: Link UP
Sep 10 23:43:50.029204 systemd-networkd[803]: eth0: Gained carrier
Sep 10 23:43:50.029214 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:43:50.063652 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:43:50.069701 ignition[702]: parsing config with SHA512: 9ebd53357c65db6d7d394afc6e9e2513313a9b286c531717a2b5538cd1c2b40568ff04606db52ce98b3f9d048aeb35d3f48d99eba9c8045813708d6b212d257b
Sep 10 23:43:50.074326 unknown[702]: fetched base config from "system"
Sep 10 23:43:50.074335 unknown[702]: fetched user config from "qemu"
Sep 10 23:43:50.074688 ignition[702]: fetch-offline: fetch-offline passed
Sep 10 23:43:50.076463 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:43:50.074744 ignition[702]: Ignition finished successfully
Sep 10 23:43:50.077777 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:43:50.078530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:43:50.106802 ignition[811]: Ignition 2.21.0
Sep 10 23:43:50.106819 ignition[811]: Stage: kargs
Sep 10 23:43:50.106968 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:50.106977 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:50.109065 ignition[811]: kargs: kargs passed
Sep 10 23:43:50.109131 ignition[811]: Ignition finished successfully
Sep 10 23:43:50.110896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:43:50.113759 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:43:50.136794 ignition[819]: Ignition 2.21.0
Sep 10 23:43:50.136807 ignition[819]: Stage: disks
Sep 10 23:43:50.136952 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:50.136961 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:50.138402 ignition[819]: disks: disks passed
Sep 10 23:43:50.138465 ignition[819]: Ignition finished successfully
Sep 10 23:43:50.142156 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:43:50.143581 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:43:50.145770 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:43:50.147652 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:43:50.149321 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:43:50.150936 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:43:50.153400 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:43:50.181315 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:43:50.187235 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:43:50.189521 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:43:50.252612 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none.
Sep 10 23:43:50.253135 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:43:50.254271 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:43:50.257270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:43:50.259353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:43:50.260301 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:43:50.260351 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:43:50.260378 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:43:50.278381 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:43:50.280558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:43:50.286356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Sep 10 23:43:50.286412 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:43:50.287637 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:43:50.290653 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:43:50.290714 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:43:50.292676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:43:50.324448 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:43:50.328193 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:43:50.332800 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:43:50.336358 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:43:50.422740 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:43:50.424678 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:43:50.426351 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:43:50.440615 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:43:50.456770 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:43:50.470207 ignition[952]: INFO : Ignition 2.21.0
Sep 10 23:43:50.470207 ignition[952]: INFO : Stage: mount
Sep 10 23:43:50.471621 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:50.471621 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:50.471621 ignition[952]: INFO : mount: mount passed
Sep 10 23:43:50.471621 ignition[952]: INFO : Ignition finished successfully
Sep 10 23:43:50.473146 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:43:50.475030 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:43:50.847473 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:43:50.849074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:43:50.880610 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Sep 10 23:43:50.882340 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:43:50.882364 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:43:50.885983 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:43:50.886037 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:43:50.887877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:43:50.926960 ignition[982]: INFO : Ignition 2.21.0
Sep 10 23:43:50.927882 ignition[982]: INFO : Stage: files
Sep 10 23:43:50.928458 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:50.928458 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:50.930755 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:43:50.932963 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:43:50.932963 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:43:50.935267 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:43:50.936485 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:43:50.936485 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:43:50.935882 unknown[982]: wrote ssh authorized keys file for user: core
Sep 10 23:43:50.939781 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:43:50.939781 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 10 23:43:50.978135 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:43:51.451531 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:43:51.453176 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:43:51.463992 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 10 23:43:51.733767 systemd-networkd[803]: eth0: Gained IPv6LL
Sep 10 23:43:51.860766 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 23:43:52.325700 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:43:52.327549 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 23:43:52.328784 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:43:52.331713 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:43:52.331713 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 23:43:52.331713 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 23:43:52.335385 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:43:52.335385 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:43:52.335385 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 23:43:52.335385 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:43:52.348691 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:43:52.352608 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:43:52.354433 ignition[982]: INFO : files: files passed
Sep 10 23:43:52.361980 ignition[982]: INFO : Ignition finished successfully
Sep 10 23:43:52.357645 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:43:52.362766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:43:52.365631 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:43:52.378419 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:43:52.378525 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:43:52.381807 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:43:52.382904 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:43:52.382904 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:43:52.385712 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:43:52.385392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:43:52.386739 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:43:52.389874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:43:52.424018 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:43:52.424692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:43:52.425926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:43:52.430063 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:43:52.433688 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:43:52.435007 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:43:52.460021 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:43:52.462470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:43:52.496098 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:43:52.497103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:43:52.498591 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:43:52.500010 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:43:52.500149 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:43:52.502147 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:43:52.503638 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:43:52.504935 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:43:52.506273 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:43:52.507722 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:43:52.509475 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:43:52.510995 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:43:52.512414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:43:52.513940 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:43:52.515486 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:43:52.516964 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:43:52.518144 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:43:52.518275 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:43:52.520180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:43:52.521618 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:43:52.523167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:43:52.526697 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:43:52.527697 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:43:52.527826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:43:52.530141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:43:52.530259 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:43:52.531748 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:43:52.532949 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:43:52.533100 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:43:52.534637 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:43:52.535801 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:43:52.537143 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:43:52.537230 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:43:52.538780 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:43:52.538853 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:43:52.540290 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:43:52.540406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:43:52.541686 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:43:52.541788 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:43:52.543823 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:43:52.545634 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:43:52.546888 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:43:52.547025 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:43:52.548619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:43:52.548722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:43:52.553378 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:43:52.555809 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:43:52.565379 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:43:52.572396 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:43:52.572532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:43:52.578330 ignition[1037]: INFO : Ignition 2.21.0
Sep 10 23:43:52.579316 ignition[1037]: INFO : Stage: umount
Sep 10 23:43:52.580680 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:43:52.580680 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:43:52.582383 ignition[1037]: INFO : umount: umount passed
Sep 10 23:43:52.582383 ignition[1037]: INFO : Ignition finished successfully
Sep 10 23:43:52.583479 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:43:52.583615 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:43:52.586252 systemd[1]: Stopped target network.target - Network.
Sep 10 23:43:52.587022 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:43:52.587083 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:43:52.588440 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:43:52.588483 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:43:52.590375 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:43:52.590428 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:43:52.592103 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:43:52.592144 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:43:52.594380 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:43:52.594423 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:43:52.595469 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:43:52.598269 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:43:52.601355 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:43:52.601486 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:43:52.608903 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:43:52.609228 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:43:52.609293 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:43:52.612065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:43:52.612278 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:43:52.612379 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:43:52.615727 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:43:52.616195 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:43:52.617295 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:43:52.617337 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:43:52.619853 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:43:52.620994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:43:52.621073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:43:52.622582 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:43:52.622655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:43:52.624923 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:43:52.625023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:43:52.627540 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:43:52.630360 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:43:52.643877 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:43:52.643991 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:43:52.650389 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:43:52.650551 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:43:52.652368 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:43:52.652409 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:43:52.653763 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:43:52.653796 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:43:52.655126 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:43:52.655171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:43:52.657203 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:43:52.657248 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:43:52.659496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:43:52.659553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:43:52.662525 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:43:52.663829 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:43:52.663890 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:43:52.666367 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:43:52.666417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:43:52.669024 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 23:43:52.669071 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:43:52.671537 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:43:52.671591 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:43:52.673302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:43:52.673342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:43:52.687151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:43:52.687292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:43:52.689078 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:43:52.691191 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:43:52.711049 systemd[1]: Switching root.
Sep 10 23:43:52.740858 systemd-journald[245]: Journal stopped
Sep 10 23:43:53.485484 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:43:53.485538 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:43:53.485555 kernel: SELinux: policy capability open_perms=1
Sep 10 23:43:53.485578 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:43:53.485590 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:43:53.485625 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:43:53.485641 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:43:53.485652 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:43:53.485661 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:43:53.485673 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:43:53.485686 kernel: audit: type=1403 audit(1757547832.903:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:43:53.485697 systemd[1]: Successfully loaded SELinux policy in 49.885ms.
Sep 10 23:43:53.485715 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.724ms.
Sep 10 23:43:53.485727 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:43:53.485740 systemd[1]: Detected virtualization kvm.
Sep 10 23:43:53.485750 systemd[1]: Detected architecture arm64.
Sep 10 23:43:53.485762 systemd[1]: Detected first boot.
Sep 10 23:43:53.485772 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:43:53.485781 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:43:53.485791 zram_generator::config[1084]: No configuration found.
Sep 10 23:43:53.485807 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:43:53.485818 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:43:53.485829 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:43:53.485839 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:43:53.485851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:43:53.485861 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:43:53.485871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:43:53.485882 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:43:53.485893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:43:53.485903 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:43:53.485913 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:43:53.485923 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:43:53.485933 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:43:53.485945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:43:53.485956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:43:53.485966 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:43:53.485978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:43:53.485988 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:43:53.485999 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:43:53.486009 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:43:53.486019 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:43:53.486031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:43:53.486041 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:43:53.486051 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:43:53.486079 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:43:53.486089 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:43:53.486099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:43:53.486109 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:43:53.486119 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:43:53.486129 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:43:53.486141 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:43:53.486151 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:43:53.486162 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:43:53.486172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:43:53.486182 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:43:53.486192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:43:53.486202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:43:53.486212 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:43:53.486222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:43:53.486234 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:43:53.486244 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:43:53.486254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:43:53.486264 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:43:53.486275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:43:53.486285 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:43:53.486295 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:43:53.486305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:43:53.486317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:43:53.486327 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:43:53.486337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:43:53.486347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:43:53.486359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:43:53.486369 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:43:53.486379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:43:53.486389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:43:53.486399 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:43:53.486411 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:43:53.486422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:43:53.486431 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:43:53.486442 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:43:53.486452 kernel: fuse: init (API version 7.41)
Sep 10 23:43:53.486462 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:43:53.486472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:43:53.486482 kernel: loop: module loaded
Sep 10 23:43:53.486491 kernel: ACPI: bus type drm_connector registered
Sep 10 23:43:53.486502 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:43:53.486512 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:43:53.486522 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:43:53.486533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:43:53.486545 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:43:53.486555 systemd[1]: Stopped verity-setup.service.
Sep 10 23:43:53.486604 systemd-journald[1155]: Collecting audit messages is disabled.
Sep 10 23:43:53.486628 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:43:53.486640 systemd-journald[1155]: Journal started
Sep 10 23:43:53.486662 systemd-journald[1155]: Runtime Journal (/run/log/journal/4a587fbf951e472686b2ad8e5ea7cbd3) is 6M, max 48.5M, 42.4M free.
Sep 10 23:43:53.281412 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:43:53.306731 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:43:53.307162 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:43:53.489227 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:43:53.490636 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:43:53.491670 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:43:53.492545 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:43:53.493552 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:43:53.494763 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:43:53.496498 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:43:53.497848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:43:53.499079 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:43:53.499265 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:43:53.500456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:43:53.500671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:43:53.501810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:43:53.501968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:43:53.503038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:43:53.503205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:43:53.504411 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:43:53.504611 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:43:53.505681 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:43:53.505840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:43:53.507126 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:43:53.508298 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:43:53.509632 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:43:53.511134 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:43:53.523549 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:43:53.525886 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:43:53.527937 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:43:53.528926 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:43:53.528959 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:43:53.530685 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:43:53.537505 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:43:53.538560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:43:53.539852 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:43:53.541753 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:43:53.542873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:43:53.545769 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:43:53.546698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:43:53.547891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:43:53.550115 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:43:53.551627 systemd-journald[1155]: Time spent on flushing to /var/log/journal/4a587fbf951e472686b2ad8e5ea7cbd3 is 18.447ms for 883 entries.
Sep 10 23:43:53.551627 systemd-journald[1155]: System Journal (/var/log/journal/4a587fbf951e472686b2ad8e5ea7cbd3) is 8M, max 195.6M, 187.6M free.
Sep 10 23:43:53.580781 systemd-journald[1155]: Received client request to flush runtime journal.
Sep 10 23:43:53.580881 kernel: loop0: detected capacity change from 0 to 107312
Sep 10 23:43:53.552391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:43:53.557369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:43:53.558881 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:43:53.560173 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:43:53.583306 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:43:53.588582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:43:53.590195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:43:53.592647 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:43:53.592497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:43:53.595391 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:43:53.606121 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 10 23:43:53.606143 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 10 23:43:53.611033 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:43:53.614192 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:43:53.631728 kernel: loop1: detected capacity change from 0 to 138376
Sep 10 23:43:53.646729 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:43:53.657634 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:43:53.660148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:43:53.663630 kernel: loop2: detected capacity change from 0 to 207008
Sep 10 23:43:53.682652 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 10 23:43:53.682669 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 10 23:43:53.686616 kernel: loop3: detected capacity change from 0 to 107312
Sep 10 23:43:53.688654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:43:53.693525 kernel: loop4: detected capacity change from 0 to 138376
Sep 10 23:43:53.716630 kernel: loop5: detected capacity change from 0 to 207008
Sep 10 23:43:53.725531 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:43:53.726140 (sd-merge)[1223]: Merged extensions into '/usr'.
Sep 10 23:43:53.732753 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:43:53.732770 systemd[1]: Reloading...
Sep 10 23:43:53.787657 zram_generator::config[1252]: No configuration found.
Sep 10 23:43:53.866717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:43:53.881498 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:43:53.929848 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:43:53.930156 systemd[1]: Reloading finished in 196 ms.
Sep 10 23:43:53.966932 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:43:53.968362 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:43:53.985056 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:43:53.986907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:43:53.997234 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:43:53.997252 systemd[1]: Reloading...
Sep 10 23:43:54.007001 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:43:54.007043 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:43:54.007278 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:43:54.007456 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:43:54.008106 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:43:54.008322 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 10 23:43:54.008369 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 10 23:43:54.011089 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:43:54.011104 systemd-tmpfiles[1286]: Skipping /boot
Sep 10 23:43:54.021012 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:43:54.021032 systemd-tmpfiles[1286]: Skipping /boot
Sep 10 23:43:54.047631 zram_generator::config[1316]: No configuration found.
Sep 10 23:43:54.116093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:43:54.178908 systemd[1]: Reloading finished in 181 ms.
Sep 10 23:43:54.201725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:43:54.208671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:43:54.215480 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:43:54.221305 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:43:54.228956 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:43:54.239629 kernel: hrtimer: interrupt took 5436720 ns
Sep 10 23:43:54.242780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:43:54.249395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:43:54.256785 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:43:54.263191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:43:54.274325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:43:54.280929 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:43:54.285278 systemd-udevd[1361]: Using default interface naming scheme 'v255'.
Sep 10 23:43:54.286145 augenrules[1377]: No rules
Sep 10 23:43:54.287204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:43:54.288344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:43:54.288548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:43:54.293513 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:43:54.296854 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:43:54.297104 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:43:54.307821 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:43:54.310414 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:43:54.314608 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:43:54.316781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:43:54.316993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:43:54.319167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:43:54.319350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:43:54.322108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:43:54.322277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:43:54.337001 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:43:54.349875 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:43:54.352010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:43:54.354653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:43:54.359979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:43:54.365971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:43:54.374438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:43:54.375809 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:43:54.375949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:43:54.378935 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:43:54.381327 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:43:54.383691 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:43:54.385135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:43:54.385298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:43:54.386823 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:43:54.387022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:43:54.394952 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:43:54.397346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:43:54.397525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:43:54.406461 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:43:54.407448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:43:54.409368 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:43:54.410676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:43:54.410743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:43:54.412808 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:43:54.416661 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:43:54.424616 augenrules[1417]: /sbin/augenrules: No change
Sep 10 23:43:54.441879 augenrules[1454]: No rules
Sep 10 23:43:54.442494 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:43:54.442800 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:43:54.478104 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:43:54.498706 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 23:43:54.499844 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 23:43:54.511360 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:43:54.511884 systemd-networkd[1422]: lo: Link UP
Sep 10 23:43:54.511898 systemd-networkd[1422]: lo: Gained carrier
Sep 10 23:43:54.512795 systemd-networkd[1422]: Enumeration completed
Sep 10 23:43:54.513771 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:43:54.513798 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:43:54.514490 systemd-networkd[1422]: eth0: Link UP
Sep 10 23:43:54.514660 systemd-networkd[1422]: eth0: Gained carrier
Sep 10 23:43:54.514685 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:43:54.515783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:43:54.516768 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:43:54.518877 systemd-resolved[1358]: Positive Trust Anchors:
Sep 10 23:43:54.518896 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:43:54.518929 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:43:54.519042 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 10 23:43:54.527728 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:43:54.528283 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection.
Sep 10 23:43:54.529364 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 23:43:54.529418 systemd-timesyncd[1441]: Initial clock synchronization to Wed 2025-09-10 23:43:54.579988 UTC.
Sep 10 23:43:54.530140 systemd-resolved[1358]: Defaulting to hostname 'linux'.
Sep 10 23:43:54.531091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 23:43:54.532986 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:43:54.535667 systemd[1]: Reached target network.target - Network.
Sep 10 23:43:54.537022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:43:54.538168 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:43:54.539300 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 23:43:54.540402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:43:54.541755 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:43:54.542677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:43:54.543924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:43:54.544923 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:43:54.544958 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:43:54.545900 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:43:54.547665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:43:54.549947 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:43:54.552751 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:43:54.553864 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:43:54.554798 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:43:54.558280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:43:54.560136 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:43:54.563646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 23:43:54.564955 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:43:54.566242 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:43:54.568413 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 10 23:43:54.569294 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:43:54.570119 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:43:54.570153 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:43:54.571585 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:43:54.574962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:43:54.576874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:43:54.584753 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:43:54.586720 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:43:54.587485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:43:54.589323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:43:54.592388 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:43:54.594500 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:43:54.598197 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:43:54.599111 jq[1481]: false Sep 10 23:43:54.603531 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:43:54.605393 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:43:54.605997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:43:54.607820 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 10 23:43:54.609606 extend-filesystems[1482]: Found /dev/vda6 Sep 10 23:43:54.611332 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:43:54.615620 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:43:54.619005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:43:54.619214 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:43:54.620182 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:43:54.620374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:43:54.624757 extend-filesystems[1482]: Found /dev/vda9 Sep 10 23:43:54.627869 extend-filesystems[1482]: Checking size of /dev/vda9 Sep 10 23:43:54.630442 jq[1498]: true Sep 10 23:43:54.645675 update_engine[1496]: I20250910 23:43:54.644676 1496 main.cc:92] Flatcar Update Engine starting Sep 10 23:43:54.651945 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:43:54.652223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:43:54.652782 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:43:54.653941 jq[1522]: true Sep 10 23:43:54.661222 extend-filesystems[1482]: Resized partition /dev/vda9 Sep 10 23:43:54.664138 extend-filesystems[1533]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:43:54.670289 dbus-daemon[1479]: [system] SELinux support is enabled Sep 10 23:43:54.670495 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:43:54.673638 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 10 23:43:54.674156 tar[1504]: linux-arm64/LICENSE Sep 10 23:43:54.674156 tar[1504]: linux-arm64/helm Sep 10 23:43:54.675231 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:43:54.673679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:43:54.675206 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:43:54.675224 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:43:54.683091 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:43:54.683290 update_engine[1496]: I20250910 23:43:54.683198 1496 update_check_scheduler.cc:74] Next update check in 8m5s Sep 10 23:43:54.689185 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:43:54.698623 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:43:54.714811 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:43:54.714811 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:43:54.714811 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:43:54.715803 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:43:54.725239 extend-filesystems[1482]: Resized filesystem in /dev/vda9 Sep 10 23:43:54.716499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:43:54.727012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:43:54.741211 bash[1550]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:43:54.747888 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Sep 10 23:43:54.749530 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 23:43:54.772342 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:43:54.860642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:43:54.883857 containerd[1523]: time="2025-09-10T23:43:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:43:54.887638 containerd[1523]: time="2025-09-10T23:43:54.887430960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 10 23:43:54.900310 containerd[1523]: time="2025-09-10T23:43:54.900248320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.4µs" Sep 10 23:43:54.900310 containerd[1523]: time="2025-09-10T23:43:54.900297240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:43:54.900310 containerd[1523]: time="2025-09-10T23:43:54.900317440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:43:54.900519 containerd[1523]: time="2025-09-10T23:43:54.900494880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:43:54.900544 containerd[1523]: time="2025-09-10T23:43:54.900519120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:43:54.900620 containerd[1523]: time="2025-09-10T23:43:54.900584880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:43:54.900709 containerd[1523]: time="2025-09-10T23:43:54.900682280Z" level=info msg="skip loading plugin" 
error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:43:54.900709 containerd[1523]: time="2025-09-10T23:43:54.900705600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:43:54.900974 containerd[1523]: time="2025-09-10T23:43:54.900944560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:43:54.900974 containerd[1523]: time="2025-09-10T23:43:54.900971200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901030 containerd[1523]: time="2025-09-10T23:43:54.900982720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901030 containerd[1523]: time="2025-09-10T23:43:54.900991400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901087 containerd[1523]: time="2025-09-10T23:43:54.901064760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901316 containerd[1523]: time="2025-09-10T23:43:54.901290880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901358 containerd[1523]: time="2025-09-10T23:43:54.901330040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:43:54.901358 containerd[1523]: time="2025-09-10T23:43:54.901341640Z" 
level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:43:54.901414 containerd[1523]: time="2025-09-10T23:43:54.901396840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:43:54.901984 containerd[1523]: time="2025-09-10T23:43:54.901941040Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:43:54.902206 containerd[1523]: time="2025-09-10T23:43:54.902171960Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:43:54.905930 containerd[1523]: time="2025-09-10T23:43:54.905882120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:43:54.906037 containerd[1523]: time="2025-09-10T23:43:54.905981160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:43:54.906057 containerd[1523]: time="2025-09-10T23:43:54.905997920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:43:54.906057 containerd[1523]: time="2025-09-10T23:43:54.906052360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:43:54.906097 containerd[1523]: time="2025-09-10T23:43:54.906066320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:43:54.906097 containerd[1523]: time="2025-09-10T23:43:54.906080520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:43:54.906097 containerd[1523]: time="2025-09-10T23:43:54.906092080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:43:54.906142 containerd[1523]: time="2025-09-10T23:43:54.906105000Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:43:54.906142 containerd[1523]: time="2025-09-10T23:43:54.906117240Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:43:54.906142 containerd[1523]: time="2025-09-10T23:43:54.906128760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:43:54.906142 containerd[1523]: time="2025-09-10T23:43:54.906138760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:43:54.906205 containerd[1523]: time="2025-09-10T23:43:54.906151760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906282880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906313600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906333000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906344400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906355440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906366560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906377360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906387360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906407720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906418960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:43:54.906509 containerd[1523]: time="2025-09-10T23:43:54.906429880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:43:54.906926 containerd[1523]: time="2025-09-10T23:43:54.906679840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:43:54.906926 containerd[1523]: time="2025-09-10T23:43:54.906703040Z" level=info msg="Start snapshots syncer" Sep 10 23:43:54.906926 containerd[1523]: time="2025-09-10T23:43:54.906749680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:43:54.906854 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:43:54.908874 containerd[1523]: time="2025-09-10T23:43:54.907117120Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:43:54.908874 containerd[1523]: time="2025-09-10T23:43:54.907170680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:43:54.907100 systemd-logind[1495]: New seat seat0. 
Sep 10 23:43:54.908838 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:43:54.911206 containerd[1523]: time="2025-09-10T23:43:54.910945920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911772880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911832720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911852480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911869200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911888080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911902400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911918480Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911972080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.911989040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912004680Z" level=info msg="loading plugin" 
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912056880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912077120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912088040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912101480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912113200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912126200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912143000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912228760Z" level=info msg="runtime interface created" Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912234240Z" level=info msg="created NRI interface" Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912243240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912267120Z" level=info msg="Connect containerd service" Sep 10 
23:43:54.912960 containerd[1523]: time="2025-09-10T23:43:54.912313600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:43:54.916197 containerd[1523]: time="2025-09-10T23:43:54.916146760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:43:55.002462 containerd[1523]: time="2025-09-10T23:43:55.002332182Z" level=info msg="Start subscribing containerd event" Sep 10 23:43:55.002693 containerd[1523]: time="2025-09-10T23:43:55.002669289Z" level=info msg="Start recovering state" Sep 10 23:43:55.003009 containerd[1523]: time="2025-09-10T23:43:55.002988581Z" level=info msg="Start event monitor" Sep 10 23:43:55.003157 containerd[1523]: time="2025-09-10T23:43:55.003141786Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:43:55.003267 containerd[1523]: time="2025-09-10T23:43:55.003215058Z" level=info msg="Start streaming server" Sep 10 23:43:55.003335 containerd[1523]: time="2025-09-10T23:43:55.003322478Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:43:55.003478 containerd[1523]: time="2025-09-10T23:43:55.003454015Z" level=info msg="runtime interface starting up..." Sep 10 23:43:55.003531 containerd[1523]: time="2025-09-10T23:43:55.003520184Z" level=info msg="starting plugins..." Sep 10 23:43:55.003706 containerd[1523]: time="2025-09-10T23:43:55.003689561Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:43:55.004367 containerd[1523]: time="2025-09-10T23:43:55.004339941Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:43:55.004441 containerd[1523]: time="2025-09-10T23:43:55.004396239Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 10 23:43:55.004467 containerd[1523]: time="2025-09-10T23:43:55.004452538Z" level=info msg="containerd successfully booted in 0.121032s" Sep 10 23:43:55.004562 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 23:43:55.165956 tar[1504]: linux-arm64/README.md Sep 10 23:43:55.183481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:43:55.404583 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:43:55.425117 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:43:55.427891 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:43:55.451053 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:43:55.451337 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:43:55.454154 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:43:55.485075 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:43:55.487898 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:43:55.490120 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:43:55.491393 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:43:56.149944 systemd-networkd[1422]: eth0: Gained IPv6LL Sep 10 23:43:56.152751 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:43:56.154752 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:43:56.157203 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:43:56.159463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:43:56.166270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:43:56.200142 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 10 23:43:56.202308 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:43:56.202574 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:43:56.206721 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:43:56.741690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:43:56.743116 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:43:56.747586 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:43:56.747725 systemd[1]: Startup finished in 2.054s (kernel) + 5.304s (initrd) + 3.894s (userspace) = 11.253s. Sep 10 23:43:57.088156 kubelet[1631]: E0910 23:43:57.088021 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:43:57.090462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:43:57.090625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:43:57.091029 systemd[1]: kubelet.service: Consumed 766ms CPU time, 256.3M memory peak. Sep 10 23:44:00.594429 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:44:00.595773 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:45726.service - OpenSSH per-connection server daemon (10.0.0.1:45726). 
Sep 10 23:44:00.688525 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 45726 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:00.693530 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:00.699894 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:44:00.700864 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:44:00.706535 systemd-logind[1495]: New session 1 of user core. Sep 10 23:44:00.726682 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:44:00.732421 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:44:00.747041 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:44:00.750263 systemd-logind[1495]: New session c1 of user core. Sep 10 23:44:00.860423 systemd[1648]: Queued start job for default target default.target. Sep 10 23:44:00.877671 systemd[1648]: Created slice app.slice - User Application Slice. Sep 10 23:44:00.877703 systemd[1648]: Reached target paths.target - Paths. Sep 10 23:44:00.877740 systemd[1648]: Reached target timers.target - Timers. Sep 10 23:44:00.879110 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:44:00.889204 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:44:00.889276 systemd[1648]: Reached target sockets.target - Sockets. Sep 10 23:44:00.889318 systemd[1648]: Reached target basic.target - Basic System. Sep 10 23:44:00.889346 systemd[1648]: Reached target default.target - Main User Target. Sep 10 23:44:00.889372 systemd[1648]: Startup finished in 132ms. Sep 10 23:44:00.889655 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:44:00.891539 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 10 23:44:00.952214 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:45740.service - OpenSSH per-connection server daemon (10.0.0.1:45740). Sep 10 23:44:01.012275 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 45740 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:01.013709 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:01.019015 systemd-logind[1495]: New session 2 of user core. Sep 10 23:44:01.030840 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 23:44:01.086002 sshd[1661]: Connection closed by 10.0.0.1 port 45740 Sep 10 23:44:01.087317 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:01.106422 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:45740.service: Deactivated successfully. Sep 10 23:44:01.108321 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:44:01.109225 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit. Sep 10 23:44:01.112431 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:45746.service - OpenSSH per-connection server daemon (10.0.0.1:45746). Sep 10 23:44:01.113711 systemd-logind[1495]: Removed session 2. Sep 10 23:44:01.193390 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 45746 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:01.197045 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:01.205673 systemd-logind[1495]: New session 3 of user core. Sep 10 23:44:01.219883 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 23:44:01.271698 sshd[1669]: Connection closed by 10.0.0.1 port 45746 Sep 10 23:44:01.273861 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:01.296028 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:45746.service: Deactivated successfully. 
Sep 10 23:44:01.298176 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 23:44:01.299193 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. Sep 10 23:44:01.302022 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:45750.service - OpenSSH per-connection server daemon (10.0.0.1:45750). Sep 10 23:44:01.302587 systemd-logind[1495]: Removed session 3. Sep 10 23:44:01.353414 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 45750 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:01.354893 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:01.360679 systemd-logind[1495]: New session 4 of user core. Sep 10 23:44:01.369881 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 23:44:01.424221 sshd[1677]: Connection closed by 10.0.0.1 port 45750 Sep 10 23:44:01.424744 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:01.435974 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:45750.service: Deactivated successfully. Sep 10 23:44:01.439535 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 23:44:01.440556 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit. Sep 10 23:44:01.445080 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752). Sep 10 23:44:01.445875 systemd-logind[1495]: Removed session 4. Sep 10 23:44:01.520652 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:01.522152 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:01.526372 systemd-logind[1495]: New session 5 of user core. Sep 10 23:44:01.541846 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 10 23:44:01.600635 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 23:44:01.600919 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:44:01.613531 sudo[1686]: pam_unix(sudo:session): session closed for user root Sep 10 23:44:01.615160 sshd[1685]: Connection closed by 10.0.0.1 port 45752 Sep 10 23:44:01.615731 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:01.628940 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:45752.service: Deactivated successfully. Sep 10 23:44:01.632254 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 23:44:01.633125 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit. Sep 10 23:44:01.636012 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:45762.service - OpenSSH per-connection server daemon (10.0.0.1:45762). Sep 10 23:44:01.636519 systemd-logind[1495]: Removed session 5. Sep 10 23:44:01.698528 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 45762 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:01.700071 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:01.705580 systemd-logind[1495]: New session 6 of user core. Sep 10 23:44:01.713835 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 10 23:44:01.772450 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 23:44:01.772797 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:44:01.849063 sudo[1696]: pam_unix(sudo:session): session closed for user root Sep 10 23:44:01.855057 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 23:44:01.855334 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:44:01.866505 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:44:01.924673 augenrules[1718]: No rules Sep 10 23:44:01.925425 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:44:01.925694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:44:01.928078 sudo[1695]: pam_unix(sudo:session): session closed for user root Sep 10 23:44:01.929509 sshd[1694]: Connection closed by 10.0.0.1 port 45762 Sep 10 23:44:01.929778 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:01.943627 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:45762.service: Deactivated successfully. Sep 10 23:44:01.945927 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 23:44:01.947393 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit. Sep 10 23:44:01.952484 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:45776.service - OpenSSH per-connection server daemon (10.0.0.1:45776). Sep 10 23:44:01.953161 systemd-logind[1495]: Removed session 6. Sep 10 23:44:02.016962 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 45776 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:44:02.018429 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:44:02.023362 systemd-logind[1495]: New session 7 of user core. 
Sep 10 23:44:02.031805 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 23:44:02.083943 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 23:44:02.084232 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:44:02.400541 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 23:44:02.428104 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 23:44:02.658142 dockerd[1750]: time="2025-09-10T23:44:02.658004225Z" level=info msg="Starting up" Sep 10 23:44:02.659525 dockerd[1750]: time="2025-09-10T23:44:02.659488460Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 23:44:02.705861 dockerd[1750]: time="2025-09-10T23:44:02.705799335Z" level=info msg="Loading containers: start." Sep 10 23:44:02.715630 kernel: Initializing XFRM netlink socket Sep 10 23:44:02.945222 systemd-networkd[1422]: docker0: Link UP Sep 10 23:44:02.949467 dockerd[1750]: time="2025-09-10T23:44:02.949420420Z" level=info msg="Loading containers: done." 
Sep 10 23:44:02.964288 dockerd[1750]: time="2025-09-10T23:44:02.964236205Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 23:44:02.964437 dockerd[1750]: time="2025-09-10T23:44:02.964330796Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 10 23:44:02.964558 dockerd[1750]: time="2025-09-10T23:44:02.964535689Z" level=info msg="Initializing buildkit" Sep 10 23:44:02.992429 dockerd[1750]: time="2025-09-10T23:44:02.992383818Z" level=info msg="Completed buildkit initialization" Sep 10 23:44:03.000101 dockerd[1750]: time="2025-09-10T23:44:03.000035783Z" level=info msg="Daemon has completed initialization" Sep 10 23:44:03.000249 dockerd[1750]: time="2025-09-10T23:44:03.000118710Z" level=info msg="API listen on /run/docker.sock" Sep 10 23:44:03.000398 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 23:44:03.705058 containerd[1523]: time="2025-09-10T23:44:03.704380461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 10 23:44:04.329014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960764430.mount: Deactivated successfully. 
Sep 10 23:44:05.330794 containerd[1523]: time="2025-09-10T23:44:05.330732793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:05.331792 containerd[1523]: time="2025-09-10T23:44:05.331331989Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 10 23:44:05.332214 containerd[1523]: time="2025-09-10T23:44:05.332184368Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:05.335697 containerd[1523]: time="2025-09-10T23:44:05.335654421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:05.337125 containerd[1523]: time="2025-09-10T23:44:05.337090009Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.632653364s" Sep 10 23:44:05.337259 containerd[1523]: time="2025-09-10T23:44:05.337241902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 10 23:44:05.338019 containerd[1523]: time="2025-09-10T23:44:05.337888778Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 10 23:44:06.383626 containerd[1523]: time="2025-09-10T23:44:06.383099453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:06.384015 containerd[1523]: time="2025-09-10T23:44:06.383921456Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 10 23:44:06.384824 containerd[1523]: time="2025-09-10T23:44:06.384790892Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:06.387551 containerd[1523]: time="2025-09-10T23:44:06.387493387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:06.389681 containerd[1523]: time="2025-09-10T23:44:06.389617020Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.051487805s" Sep 10 23:44:06.389681 containerd[1523]: time="2025-09-10T23:44:06.389665936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 10 23:44:06.390757 containerd[1523]: time="2025-09-10T23:44:06.390479566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 10 23:44:07.341133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 23:44:07.342643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:44:07.509545 containerd[1523]: time="2025-09-10T23:44:07.509489243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:07.510030 containerd[1523]: time="2025-09-10T23:44:07.509992139Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 10 23:44:07.511057 containerd[1523]: time="2025-09-10T23:44:07.510989317Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:07.513955 containerd[1523]: time="2025-09-10T23:44:07.513908507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:07.514957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:44:07.515254 containerd[1523]: time="2025-09-10T23:44:07.515219184Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.1247018s" Sep 10 23:44:07.515345 containerd[1523]: time="2025-09-10T23:44:07.515329305Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 10 23:44:07.516337 containerd[1523]: time="2025-09-10T23:44:07.515862805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 10 23:44:07.519537 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:44:07.560264 kubelet[2036]: E0910 23:44:07.560199 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:44:07.563489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:44:07.563640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:44:07.564736 systemd[1]: kubelet.service: Consumed 158ms CPU time, 108M memory peak. Sep 10 23:44:08.470031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890916692.mount: Deactivated successfully. 
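The recurring "Referenced but unset environment variable evaluates to an empty string" notices (here for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, earlier for the DOCKER_* variables) are informational: the unit's ExecStart expands variables that no EnvironmentFile has defined yet, and systemd substitutes empty strings. A sketch of the kubeadm-style drop-in that produces this pattern; the exact file paths on this Flatcar image are assumptions:

```ini
# 10-kubeadm.conf -- sketch of a kubeadm-style kubelet drop-in (paths assumed)
[Service]
# The "-" prefix tells systemd to skip a missing file, leaving the variables unset
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   ; would define KUBELET_KUBEADM_ARGS
EnvironmentFile=-/etc/default/kubelet                 ; would define KUBELET_EXTRA_ARGS
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

kubeadm writes kubeadm-flags.env during init/join, at which point the notices disappear.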
Sep 10 23:44:08.842699 containerd[1523]: time="2025-09-10T23:44:08.841384513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:08.845446 containerd[1523]: time="2025-09-10T23:44:08.845391888Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 10 23:44:08.846171 containerd[1523]: time="2025-09-10T23:44:08.846113198Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:08.848154 containerd[1523]: time="2025-09-10T23:44:08.848112299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:08.848836 containerd[1523]: time="2025-09-10T23:44:08.848800523Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.332900384s" Sep 10 23:44:08.848836 containerd[1523]: time="2025-09-10T23:44:08.848833808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 10 23:44:08.849208 containerd[1523]: time="2025-09-10T23:44:08.849182406Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 23:44:09.392209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272720917.mount: Deactivated successfully. 
Sep 10 23:44:09.996593 containerd[1523]: time="2025-09-10T23:44:09.996535632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:09.999624 containerd[1523]: time="2025-09-10T23:44:09.999563205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 10 23:44:10.000692 containerd[1523]: time="2025-09-10T23:44:10.000662052Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:10.004022 containerd[1523]: time="2025-09-10T23:44:10.003402836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:10.004577 containerd[1523]: time="2025-09-10T23:44:10.004535962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.155325599s" Sep 10 23:44:10.004624 containerd[1523]: time="2025-09-10T23:44:10.004576090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 10 23:44:10.005365 containerd[1523]: time="2025-09-10T23:44:10.005162717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 23:44:10.412853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172762526.mount: Deactivated successfully. 
Sep 10 23:44:10.418082 containerd[1523]: time="2025-09-10T23:44:10.418033192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:44:10.418502 containerd[1523]: time="2025-09-10T23:44:10.418467956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 23:44:10.419332 containerd[1523]: time="2025-09-10T23:44:10.419295154Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:44:10.421190 containerd[1523]: time="2025-09-10T23:44:10.421152993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:44:10.421733 containerd[1523]: time="2025-09-10T23:44:10.421702535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 416.493002ms" Sep 10 23:44:10.421777 containerd[1523]: time="2025-09-10T23:44:10.421734013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 23:44:10.422514 containerd[1523]: time="2025-09-10T23:44:10.422209266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 23:44:10.922582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287989154.mount: Deactivated successfully. Sep 10 23:44:12.425322 containerd[1523]: time="2025-09-10T23:44:12.425269250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:12.425960 containerd[1523]: time="2025-09-10T23:44:12.425930912Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 10 23:44:12.427682 containerd[1523]: time="2025-09-10T23:44:12.427649653Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:12.430464 containerd[1523]: time="2025-09-10T23:44:12.430427836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:12.431428 containerd[1523]: time="2025-09-10T23:44:12.431398465Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.009158202s" Sep 10 23:44:12.431465 containerd[1523]: time="2025-09-10T23:44:12.431433582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 10 23:44:16.942101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:44:16.942241 systemd[1]: kubelet.service: Consumed 158ms CPU time, 108M memory peak. Sep 10 23:44:16.944213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:44:16.966748 systemd[1]: Reload requested from client PID 2194 ('systemctl') (unit session-7.scope)... Sep 10 23:44:16.966759 systemd[1]: Reloading... Sep 10 23:44:17.029647 zram_generator::config[2236]: No configuration found. Sep 10 23:44:17.098591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:44:17.185375 systemd[1]: Reloading finished in 218 ms. Sep 10 23:44:17.243305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 23:44:17.243546 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 23:44:17.243948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:44:17.244119 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95M memory peak. Sep 10 23:44:17.247519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:44:17.399647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:44:17.410933 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:44:17.451587 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:44:17.451587 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:44:17.451587 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
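The reload above surfaces a unit hygiene warning: line 6 of docker.socket declares its ListenStream= below the legacy /var/run/ directory, and systemd transparently rewrites it to /run/docker.sock while asking that the unit file be updated. A drop-in sketch of the permanent fix; the override path is an assumption, not taken from this log:

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf -- hypothetical override
[Socket]
ListenStream=                   ; an empty assignment clears the inherited socket list
ListenStream=/run/docker.sock   ; re-declare the socket under /run, as the warning asks
```

With the drop-in in place, `systemctl daemon-reload` no longer emits the legacy-path notice for this unit.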
Sep 10 23:44:17.452481 kubelet[2281]: I0910 23:44:17.451673 2281 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:44:18.601953 kubelet[2281]: I0910 23:44:18.601905 2281 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 23:44:18.601953 kubelet[2281]: I0910 23:44:18.601941 2281 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:44:18.602292 kubelet[2281]: I0910 23:44:18.602211 2281 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 23:44:18.632820 kubelet[2281]: E0910 23:44:18.632766 2281 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:44:18.634202 kubelet[2281]: I0910 23:44:18.634169 2281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:44:18.639965 kubelet[2281]: I0910 23:44:18.639927 2281 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:44:18.643052 kubelet[2281]: I0910 23:44:18.643021 2281 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 23:44:18.643713 kubelet[2281]: I0910 23:44:18.643666 2281 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:44:18.643883 kubelet[2281]: I0910 23:44:18.643708 2281 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:44:18.643974 kubelet[2281]: I0910 23:44:18.643946 2281 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:44:18.643974 kubelet[2281]: I0910 23:44:18.643955 2281 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 23:44:18.644185 kubelet[2281]: I0910 23:44:18.644158 2281 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:44:18.649029 kubelet[2281]: I0910 23:44:18.648998 2281 kubelet.go:446] "Attempting to sync node with API server" Sep 10 23:44:18.649060 kubelet[2281]: I0910 23:44:18.649030 2281 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:44:18.649060 kubelet[2281]: I0910 23:44:18.649056 2281 kubelet.go:352] "Adding apiserver pod source" Sep 10 23:44:18.649100 kubelet[2281]: I0910 23:44:18.649068 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:44:18.649981 kubelet[2281]: W0910 23:44:18.649896 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 10 23:44:18.650039 kubelet[2281]: E0910 23:44:18.649997 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:44:18.652690 kubelet[2281]: W0910 23:44:18.652655 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 10 23:44:18.652805 kubelet[2281]: E0910 23:44:18.652782 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:44:18.653024 kubelet[2281]: I0910 23:44:18.652980 2281 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:44:18.653889 kubelet[2281]: I0910 23:44:18.653800 2281 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:44:18.654023 kubelet[2281]: W0910 23:44:18.654012 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 23:44:18.655322 kubelet[2281]: I0910 23:44:18.655298 2281 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:44:18.655394 kubelet[2281]: I0910 23:44:18.655347 2281 server.go:1287] "Started kubelet" Sep 10 23:44:18.655838 kubelet[2281]: I0910 23:44:18.655795 2281 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:44:18.656960 kubelet[2281]: I0910 23:44:18.656931 2281 server.go:479] "Adding debug handlers to kubelet server" Sep 10 23:44:18.661325 kubelet[2281]: I0910 23:44:18.661259 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:44:18.661543 kubelet[2281]: I0910 23:44:18.661525 2281 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:44:18.661890 kubelet[2281]: I0910 23:44:18.661865 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:44:18.663097 kubelet[2281]: I0910 23:44:18.663041 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:44:18.663855 kubelet[2281]: I0910 23:44:18.663833 2281 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:44:18.664887 kubelet[2281]: E0910 23:44:18.664837 2281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:44:18.665472 kubelet[2281]: I0910 23:44:18.665420 2281 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:44:18.665530 kubelet[2281]: I0910 23:44:18.665509 2281 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:44:18.666615 kubelet[2281]: E0910 23:44:18.666481 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 10 23:44:18.667081 kubelet[2281]: W0910 23:44:18.666932 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 10 23:44:18.667610 kubelet[2281]: E0910 23:44:18.667280 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:44:18.667610 kubelet[2281]: I0910 23:44:18.667521 2281 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:44:18.667610 kubelet[2281]: I0910 23:44:18.667532 2281 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:44:18.667610 kubelet[2281]: E0910 23:44:18.667350 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186410772ee60b9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:44:18.655316895 +0000 UTC m=+1.241061463,LastTimestamp:2025-09-10 23:44:18.655316895 +0000 UTC m=+1.241061463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:44:18.667758 kubelet[2281]: I0910 23:44:18.667636 2281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:44:18.681989 kubelet[2281]: I0910 23:44:18.681954 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:44:18.682323 kubelet[2281]: I0910 23:44:18.682086 2281 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:44:18.682323 kubelet[2281]: I0910 23:44:18.682242 2281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:44:18.682323 kubelet[2281]: I0910 23:44:18.682262 2281 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:44:18.683563 kubelet[2281]: I0910 23:44:18.683544 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 23:44:18.683643 kubelet[2281]: I0910 23:44:18.683634 2281 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:44:18.683711 kubelet[2281]: I0910 23:44:18.683701 2281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 23:44:18.683753 kubelet[2281]: I0910 23:44:18.683746 2281 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 23:44:18.683857 kubelet[2281]: E0910 23:44:18.683840 2281 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 23:44:18.685698 kubelet[2281]: I0910 23:44:18.685677 2281 policy_none.go:49] "None policy: Start"
Sep 10 23:44:18.685698 kubelet[2281]: I0910 23:44:18.685699 2281 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 23:44:18.685782 kubelet[2281]: I0910 23:44:18.685710 2281 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 23:44:18.688210 kubelet[2281]: W0910 23:44:18.688138 2281 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Sep 10 23:44:18.688435 kubelet[2281]: E0910 23:44:18.688209 2281 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:44:18.692818 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 10 23:44:18.707352 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 10 23:44:18.710263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 10 23:44:18.728544 kubelet[2281]: I0910 23:44:18.728504 2281 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 23:44:18.728749 kubelet[2281]: I0910 23:44:18.728726 2281 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 23:44:18.728791 kubelet[2281]: I0910 23:44:18.728745 2281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 23:44:18.729785 kubelet[2281]: I0910 23:44:18.729731 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 23:44:18.730070 kubelet[2281]: E0910 23:44:18.730052 2281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 23:44:18.730689 kubelet[2281]: E0910 23:44:18.730626 2281 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 10 23:44:18.791683 systemd[1]: Created slice kubepods-burstable-pod8b6a5770719f8cccdc8c7b30f647d8f9.slice - libcontainer container kubepods-burstable-pod8b6a5770719f8cccdc8c7b30f647d8f9.slice.
Sep 10 23:44:18.822221 kubelet[2281]: E0910 23:44:18.821868 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:18.824610 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice.
Sep 10 23:44:18.834606 kubelet[2281]: I0910 23:44:18.831799 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:44:18.834897 kubelet[2281]: E0910 23:44:18.834845 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Sep 10 23:44:18.843574 kubelet[2281]: E0910 23:44:18.843556 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:18.845710 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice.
Sep 10 23:44:18.847389 kubelet[2281]: E0910 23:44:18.847208 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:18.866978 kubelet[2281]: E0910 23:44:18.866887 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms"
Sep 10 23:44:18.967042 kubelet[2281]: I0910 23:44:18.966982 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:18.967042 kubelet[2281]: I0910 23:44:18.967039 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 23:44:18.967194 kubelet[2281]: I0910 23:44:18.967057 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:18.967194 kubelet[2281]: I0910 23:44:18.967087 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:18.967194 kubelet[2281]: I0910 23:44:18.967106 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:18.967194 kubelet[2281]: I0910 23:44:18.967120 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:18.967194 kubelet[2281]: I0910 23:44:18.967137 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:18.967291 kubelet[2281]: I0910 23:44:18.967159 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:18.967291 kubelet[2281]: I0910 23:44:18.967173 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:19.036629 kubelet[2281]: I0910 23:44:19.036489 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:44:19.036840 kubelet[2281]: E0910 23:44:19.036817 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Sep 10 23:44:19.123382 containerd[1523]: time="2025-09-10T23:44:19.123279458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6a5770719f8cccdc8c7b30f647d8f9,Namespace:kube-system,Attempt:0,}"
Sep 10 23:44:19.140905 containerd[1523]: time="2025-09-10T23:44:19.140709538Z" level=info msg="connecting to shim 86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5" address="unix:///run/containerd/s/0f805f5109db02b9781a08a61ba575458b5c16b5c0103de4e2adab3b4b87f859" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:44:19.144828 containerd[1523]: time="2025-09-10T23:44:19.144794254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 10 23:44:19.151085 containerd[1523]: time="2025-09-10T23:44:19.150996318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 10 23:44:19.171803 systemd[1]: Started cri-containerd-86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5.scope - libcontainer container 86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5.
Sep 10 23:44:19.178188 containerd[1523]: time="2025-09-10T23:44:19.177685085Z" level=info msg="connecting to shim bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13" address="unix:///run/containerd/s/a3f1c72cbfd193cc71bfdff7b70673fd58164e14affdc9862a7d5c6a36176be5" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:44:19.181338 containerd[1523]: time="2025-09-10T23:44:19.181306648Z" level=info msg="connecting to shim c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd" address="unix:///run/containerd/s/65d9204da88140e0ad24a8f3b774df04ef02d37d3b739ff0ca493a73689ce509" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:44:19.207747 systemd[1]: Started cri-containerd-bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13.scope - libcontainer container bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13.
Sep 10 23:44:19.217252 systemd[1]: Started cri-containerd-c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd.scope - libcontainer container c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd.
Sep 10 23:44:19.243636 containerd[1523]: time="2025-09-10T23:44:19.243576741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6a5770719f8cccdc8c7b30f647d8f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5\""
Sep 10 23:44:19.250360 containerd[1523]: time="2025-09-10T23:44:19.250319330Z" level=info msg="CreateContainer within sandbox \"86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 23:44:19.264127 containerd[1523]: time="2025-09-10T23:44:19.264089061Z" level=info msg="Container ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:44:19.264244 containerd[1523]: time="2025-09-10T23:44:19.264184405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13\""
Sep 10 23:44:19.266620 containerd[1523]: time="2025-09-10T23:44:19.266573777Z" level=info msg="CreateContainer within sandbox \"bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 23:44:19.267349 kubelet[2281]: E0910 23:44:19.267262 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms"
Sep 10 23:44:19.274629 containerd[1523]: time="2025-09-10T23:44:19.274371038Z" level=info msg="CreateContainer within sandbox \"86a15583fadb8a33abdc36f2af6f56e7e5a62060dd3b7b22667c3117321ac6e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e\""
Sep 10 23:44:19.275648 containerd[1523]: time="2025-09-10T23:44:19.275610354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd\""
Sep 10 23:44:19.275702 containerd[1523]: time="2025-09-10T23:44:19.275676518Z" level=info msg="StartContainer for \"ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e\""
Sep 10 23:44:19.277305 containerd[1523]: time="2025-09-10T23:44:19.277234209Z" level=info msg="connecting to shim ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e" address="unix:///run/containerd/s/0f805f5109db02b9781a08a61ba575458b5c16b5c0103de4e2adab3b4b87f859" protocol=ttrpc version=3
Sep 10 23:44:19.278261 containerd[1523]: time="2025-09-10T23:44:19.278232123Z" level=info msg="Container 51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:44:19.279056 containerd[1523]: time="2025-09-10T23:44:19.279035665Z" level=info msg="CreateContainer within sandbox \"c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 23:44:19.286039 containerd[1523]: time="2025-09-10T23:44:19.286001124Z" level=info msg="CreateContainer within sandbox \"bbeacdffd4370c8a35973e2862e08f5ce06e0df025d0c745fd7b3c79da0b8a13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899\""
Sep 10 23:44:19.286421 containerd[1523]: time="2025-09-10T23:44:19.286231360Z" level=info msg="Container 3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:44:19.287494 containerd[1523]: time="2025-09-10T23:44:19.287460949Z" level=info msg="StartContainer for \"51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899\""
Sep 10 23:44:19.289071 containerd[1523]: time="2025-09-10T23:44:19.289043417Z" level=info msg="connecting to shim 51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899" address="unix:///run/containerd/s/a3f1c72cbfd193cc71bfdff7b70673fd58164e14affdc9862a7d5c6a36176be5" protocol=ttrpc version=3
Sep 10 23:44:19.293681 containerd[1523]: time="2025-09-10T23:44:19.293643080Z" level=info msg="CreateContainer within sandbox \"c0b6951d14a93294c74e3033ed3610e41f61e1c463999ea5dacc6f355c044acd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0\""
Sep 10 23:44:19.294043 containerd[1523]: time="2025-09-10T23:44:19.294016812Z" level=info msg="StartContainer for \"3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0\""
Sep 10 23:44:19.295159 containerd[1523]: time="2025-09-10T23:44:19.295126401Z" level=info msg="connecting to shim 3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0" address="unix:///run/containerd/s/65d9204da88140e0ad24a8f3b774df04ef02d37d3b739ff0ca493a73689ce509" protocol=ttrpc version=3
Sep 10 23:44:19.297752 systemd[1]: Started cri-containerd-ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e.scope - libcontainer container ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e.
Sep 10 23:44:19.309737 systemd[1]: Started cri-containerd-51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899.scope - libcontainer container 51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899.
Sep 10 23:44:19.314715 systemd[1]: Started cri-containerd-3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0.scope - libcontainer container 3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0.
Sep 10 23:44:19.351511 containerd[1523]: time="2025-09-10T23:44:19.351458048Z" level=info msg="StartContainer for \"ff43a5a638df97a2be5c342af9dbf1b576b3507190d87936d21c59d10e8a819e\" returns successfully"
Sep 10 23:44:19.366005 containerd[1523]: time="2025-09-10T23:44:19.365929571Z" level=info msg="StartContainer for \"51b5d0542abaeedde064a7c1737e400f82a7e82a7b8da14a7bdafe902d64b899\" returns successfully"
Sep 10 23:44:19.368542 containerd[1523]: time="2025-09-10T23:44:19.368427937Z" level=info msg="StartContainer for \"3d61edd5a093983214c471ca571e72df00b5d6d419b2e006efae18390dd5ebc0\" returns successfully"
Sep 10 23:44:19.439954 kubelet[2281]: I0910 23:44:19.439867 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:44:19.695129 kubelet[2281]: E0910 23:44:19.694898 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:19.697510 kubelet[2281]: E0910 23:44:19.697484 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:19.701252 kubelet[2281]: E0910 23:44:19.701227 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:20.650208 kubelet[2281]: I0910 23:44:20.650178 2281 apiserver.go:52] "Watching apiserver"
Sep 10 23:44:20.650429 kubelet[2281]: E0910 23:44:20.650324 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 10 23:44:20.666391 kubelet[2281]: I0910 23:44:20.666347 2281 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 23:44:20.704267 kubelet[2281]: E0910 23:44:20.704233 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:20.704680 kubelet[2281]: E0910 23:44:20.704350 2281 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 23:44:20.819132 kubelet[2281]: I0910 23:44:20.819081 2281 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 23:44:20.864489 kubelet[2281]: I0910 23:44:20.864433 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:20.871835 kubelet[2281]: E0910 23:44:20.871792 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:20.871835 kubelet[2281]: I0910 23:44:20.871826 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:20.873744 kubelet[2281]: E0910 23:44:20.873712 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:20.873744 kubelet[2281]: I0910 23:44:20.873738 2281 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:44:20.875350 kubelet[2281]: E0910 23:44:20.875316 2281 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:44:22.862138 systemd[1]: Reload requested from client PID 2555 ('systemctl') (unit session-7.scope)...
Sep 10 23:44:22.862152 systemd[1]: Reloading...
Sep 10 23:44:22.942642 zram_generator::config[2604]: No configuration found.
Sep 10 23:44:23.008718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:44:23.108433 systemd[1]: Reloading finished in 245 ms.
Sep 10 23:44:23.136482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:44:23.149557 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:44:23.151666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:44:23.151743 systemd[1]: kubelet.service: Consumed 1.591s CPU time, 129.1M memory peak.
Sep 10 23:44:23.153677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:44:23.306463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:44:23.310318 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:44:23.350076 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:44:23.350076 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:44:23.350076 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:44:23.350414 kubelet[2640]: I0910 23:44:23.350128 2640 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:44:23.356624 kubelet[2640]: I0910 23:44:23.355965 2640 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:44:23.356624 kubelet[2640]: I0910 23:44:23.356005 2640 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:44:23.356624 kubelet[2640]: I0910 23:44:23.356416 2640 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:44:23.358066 kubelet[2640]: I0910 23:44:23.358046 2640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 23:44:23.361682 kubelet[2640]: I0910 23:44:23.361645 2640 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:44:23.365614 kubelet[2640]: I0910 23:44:23.365585 2640 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:44:23.368155 kubelet[2640]: I0910 23:44:23.368137 2640 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:44:23.368342 kubelet[2640]: I0910 23:44:23.368319 2640 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:44:23.368504 kubelet[2640]: I0910 23:44:23.368344 2640 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:44:23.368589 kubelet[2640]: I0910 23:44:23.368511 2640 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:44:23.368589 kubelet[2640]: I0910 23:44:23.368521 2640 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:44:23.368589 kubelet[2640]: I0910 23:44:23.368560 2640 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:44:23.368705 kubelet[2640]: I0910 23:44:23.368693 2640 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:44:23.368734 kubelet[2640]: I0910 23:44:23.368708 2640 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:44:23.368734 kubelet[2640]: I0910 23:44:23.368727 2640 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:44:23.368853 kubelet[2640]: I0910 23:44:23.368736 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:44:23.369739 kubelet[2640]: I0910 23:44:23.369715 2640 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 10 23:44:23.370180 kubelet[2640]: I0910 23:44:23.370159 2640 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:44:23.371429 kubelet[2640]: I0910 23:44:23.371393 2640 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:44:23.371493 kubelet[2640]: I0910 23:44:23.371443 2640 server.go:1287] "Started kubelet"
Sep 10 23:44:23.373394 kubelet[2640]: I0910 23:44:23.373240 2640 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:44:23.379893 kubelet[2640]: I0910 23:44:23.373992 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:44:23.379893 kubelet[2640]: I0910 23:44:23.374240 2640 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:44:23.379893 kubelet[2640]: I0910 23:44:23.376832 2640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:44:23.381304 kubelet[2640]: I0910 23:44:23.381257 2640 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 23:44:23.382867 kubelet[2640]: I0910 23:44:23.382818 2640 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:44:23.384615 kubelet[2640]: E0910 23:44:23.384526 2640 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:44:23.385749 kubelet[2640]: I0910 23:44:23.385693 2640 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:44:23.385884 kubelet[2640]: E0910 23:44:23.385849 2640 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:44:23.386665 kubelet[2640]: I0910 23:44:23.386637 2640 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:44:23.386778 kubelet[2640]: I0910 23:44:23.386761 2640 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:44:23.391915 kubelet[2640]: I0910 23:44:23.391385 2640 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:44:23.391915 kubelet[2640]: I0910 23:44:23.391476 2640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:44:23.396748 kubelet[2640]: I0910 23:44:23.396722 2640 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:44:23.402743 kubelet[2640]: I0910 23:44:23.402698 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 23:44:23.405622 kubelet[2640]: I0910 23:44:23.405216 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 23:44:23.405622 kubelet[2640]: I0910 23:44:23.405243 2640 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 10 23:44:23.405622 kubelet[2640]: I0910 23:44:23.405265 2640 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 23:44:23.405622 kubelet[2640]: I0910 23:44:23.405271 2640 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 23:44:23.405622 kubelet[2640]: E0910 23:44:23.405313 2640 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 23:44:23.433839 kubelet[2640]: I0910 23:44:23.433812 2640 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:44:23.433839 kubelet[2640]: I0910 23:44:23.433832 2640 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:44:23.433839 kubelet[2640]: I0910 23:44:23.433853 2640 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:44:23.434045 kubelet[2640]: I0910 23:44:23.434017 2640 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 23:44:23.434074 kubelet[2640]: I0910 23:44:23.434044 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 23:44:23.434074 kubelet[2640]: I0910 23:44:23.434066 2640 policy_none.go:49] "None policy: Start"
Sep 10 23:44:23.434074 kubelet[2640]: I0910 23:44:23.434074 2640 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 23:44:23.434141 kubelet[2640]: I0910 23:44:23.434084 2640 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 23:44:23.434192 kubelet[2640]: I0910 23:44:23.434180 2640 state_mem.go:75] "Updated machine memory state"
Sep 10 23:44:23.438301 kubelet[2640]: I0910 23:44:23.438218 2640 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 23:44:23.438715 kubelet[2640]: I0910 23:44:23.438476 2640 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 23:44:23.438715 kubelet[2640]: I0910 23:44:23.438494 2640 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 23:44:23.439592 kubelet[2640]: I0910 23:44:23.439470 2640 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 23:44:23.439592 kubelet[2640]: E0910 23:44:23.439584 2640 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 23:44:23.506270 kubelet[2640]: I0910 23:44:23.506195 2640 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 23:44:23.506270 kubelet[2640]: I0910 23:44:23.506216 2640 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 23:44:23.506423 kubelet[2640]: I0910 23:44:23.506298 2640 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:23.541768 kubelet[2640]: I0910 23:44:23.541744 2640 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 23:44:23.549626 kubelet[2640]: I0910 23:44:23.549246 2640 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 23:44:23.549626 kubelet[2640]: I0910 23:44:23.549311 2640 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 23:44:23.588141 kubelet[2640]: I0910 23:44:23.588082 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 23:44:23.588276 kubelet[2640]: I0910 23:44:23.588157 2640 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:44:23.588276 kubelet[2640]: I0910 23:44:23.588213 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:44:23.588276 kubelet[2640]: I0910 23:44:23.588230 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:44:23.588276 kubelet[2640]: I0910 23:44:23.588255 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:44:23.588276 kubelet[2640]: I0910 23:44:23.588271 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:44:23.588400 kubelet[2640]: I0910 23:44:23.588287 2640 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6a5770719f8cccdc8c7b30f647d8f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6a5770719f8cccdc8c7b30f647d8f9\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:44:23.588400 kubelet[2640]: I0910 23:44:23.588305 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:44:23.588400 kubelet[2640]: I0910 23:44:23.588320 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:44:24.369333 kubelet[2640]: I0910 23:44:24.369236 2640 apiserver.go:52] "Watching apiserver" Sep 10 23:44:24.386861 kubelet[2640]: I0910 23:44:24.386811 2640 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:44:24.452670 kubelet[2640]: I0910 23:44:24.452505 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.452487287 podStartE2EDuration="1.452487287s" podCreationTimestamp="2025-09-10 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:44:24.442493643 +0000 UTC m=+1.127676739" watchObservedRunningTime="2025-09-10 23:44:24.452487287 +0000 UTC m=+1.137670383" Sep 10 23:44:24.460774 kubelet[2640]: I0910 
23:44:24.459951 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.459935207 podStartE2EDuration="1.459935207s" podCreationTimestamp="2025-09-10 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:44:24.452694468 +0000 UTC m=+1.137877564" watchObservedRunningTime="2025-09-10 23:44:24.459935207 +0000 UTC m=+1.145118263" Sep 10 23:44:24.460920 kubelet[2640]: I0910 23:44:24.460809 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.460045301 podStartE2EDuration="1.460045301s" podCreationTimestamp="2025-09-10 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:44:24.459906073 +0000 UTC m=+1.145089209" watchObservedRunningTime="2025-09-10 23:44:24.460045301 +0000 UTC m=+1.145228437" Sep 10 23:44:29.290152 kubelet[2640]: I0910 23:44:29.290118 2640 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:44:29.290501 containerd[1523]: time="2025-09-10T23:44:29.290387237Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:44:29.291355 kubelet[2640]: I0910 23:44:29.290737 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:44:30.231282 systemd[1]: Created slice kubepods-besteffort-poddce12432_a415_4c51_bfea_20ae35d61e2c.slice - libcontainer container kubepods-besteffort-poddce12432_a415_4c51_bfea_20ae35d61e2c.slice. 
Sep 10 23:44:30.233818 kubelet[2640]: I0910 23:44:30.233781 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dce12432-a415-4c51-bfea-20ae35d61e2c-kube-proxy\") pod \"kube-proxy-wfl9z\" (UID: \"dce12432-a415-4c51-bfea-20ae35d61e2c\") " pod="kube-system/kube-proxy-wfl9z" Sep 10 23:44:30.233818 kubelet[2640]: I0910 23:44:30.233819 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dce12432-a415-4c51-bfea-20ae35d61e2c-lib-modules\") pod \"kube-proxy-wfl9z\" (UID: \"dce12432-a415-4c51-bfea-20ae35d61e2c\") " pod="kube-system/kube-proxy-wfl9z" Sep 10 23:44:30.233939 kubelet[2640]: I0910 23:44:30.233837 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dce12432-a415-4c51-bfea-20ae35d61e2c-xtables-lock\") pod \"kube-proxy-wfl9z\" (UID: \"dce12432-a415-4c51-bfea-20ae35d61e2c\") " pod="kube-system/kube-proxy-wfl9z" Sep 10 23:44:30.233939 kubelet[2640]: I0910 23:44:30.233854 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj4br\" (UniqueName: \"kubernetes.io/projected/dce12432-a415-4c51-bfea-20ae35d61e2c-kube-api-access-fj4br\") pod \"kube-proxy-wfl9z\" (UID: \"dce12432-a415-4c51-bfea-20ae35d61e2c\") " pod="kube-system/kube-proxy-wfl9z" Sep 10 23:44:30.456941 systemd[1]: Created slice kubepods-besteffort-podc948ce4c_845a_4b67_96c7_a8df5d16b3bc.slice - libcontainer container kubepods-besteffort-podc948ce4c_845a_4b67_96c7_a8df5d16b3bc.slice. 
Sep 10 23:44:30.535094 kubelet[2640]: I0910 23:44:30.534955 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c948ce4c-845a-4b67-96c7-a8df5d16b3bc-var-lib-calico\") pod \"tigera-operator-755d956888-5c2vj\" (UID: \"c948ce4c-845a-4b67-96c7-a8df5d16b3bc\") " pod="tigera-operator/tigera-operator-755d956888-5c2vj" Sep 10 23:44:30.535094 kubelet[2640]: I0910 23:44:30.535036 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4t6\" (UniqueName: \"kubernetes.io/projected/c948ce4c-845a-4b67-96c7-a8df5d16b3bc-kube-api-access-pt4t6\") pod \"tigera-operator-755d956888-5c2vj\" (UID: \"c948ce4c-845a-4b67-96c7-a8df5d16b3bc\") " pod="tigera-operator/tigera-operator-755d956888-5c2vj" Sep 10 23:44:30.551410 containerd[1523]: time="2025-09-10T23:44:30.551318581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfl9z,Uid:dce12432-a415-4c51-bfea-20ae35d61e2c,Namespace:kube-system,Attempt:0,}" Sep 10 23:44:30.569007 containerd[1523]: time="2025-09-10T23:44:30.568955313Z" level=info msg="connecting to shim 9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a" address="unix:///run/containerd/s/42cf148c5f1ca083da3afd8ef3307c665130456c58b94d7591b703ba91daab39" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:30.597814 systemd[1]: Started cri-containerd-9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a.scope - libcontainer container 9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a. 
Sep 10 23:44:30.626247 containerd[1523]: time="2025-09-10T23:44:30.625950347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wfl9z,Uid:dce12432-a415-4c51-bfea-20ae35d61e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a\"" Sep 10 23:44:30.629678 containerd[1523]: time="2025-09-10T23:44:30.629588034Z" level=info msg="CreateContainer within sandbox \"9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:44:30.643311 containerd[1523]: time="2025-09-10T23:44:30.643199191Z" level=info msg="Container ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:30.654772 containerd[1523]: time="2025-09-10T23:44:30.654717133Z" level=info msg="CreateContainer within sandbox \"9b56217bb76ed0549c1732ffc63b56e446f0f676691f94cb859bf1e1b534799a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86\"" Sep 10 23:44:30.655535 containerd[1523]: time="2025-09-10T23:44:30.655338659Z" level=info msg="StartContainer for \"ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86\"" Sep 10 23:44:30.657105 containerd[1523]: time="2025-09-10T23:44:30.657076596Z" level=info msg="connecting to shim ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86" address="unix:///run/containerd/s/42cf148c5f1ca083da3afd8ef3307c665130456c58b94d7591b703ba91daab39" protocol=ttrpc version=3 Sep 10 23:44:30.679804 systemd[1]: Started cri-containerd-ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86.scope - libcontainer container ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86. 
Sep 10 23:44:30.712117 containerd[1523]: time="2025-09-10T23:44:30.712077408Z" level=info msg="StartContainer for \"ccb44e773053aa2cd4a7a48eb617bc37dab6aa5f5b17fce38289ad9d9dc62d86\" returns successfully" Sep 10 23:44:30.762778 containerd[1523]: time="2025-09-10T23:44:30.762728296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5c2vj,Uid:c948ce4c-845a-4b67-96c7-a8df5d16b3bc,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:44:30.782193 containerd[1523]: time="2025-09-10T23:44:30.782140018Z" level=info msg="connecting to shim b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d" address="unix:///run/containerd/s/6fbf2cd7b8746444e6c412ad37c4bbc66707392a1c15d4a9ae3a0706bbb5e57b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:30.805190 systemd[1]: Started cri-containerd-b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d.scope - libcontainer container b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d. Sep 10 23:44:30.844420 containerd[1523]: time="2025-09-10T23:44:30.844249589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5c2vj,Uid:c948ce4c-845a-4b67-96c7-a8df5d16b3bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d\"" Sep 10 23:44:30.847085 containerd[1523]: time="2025-09-10T23:44:30.846873179Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:44:31.973298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60840379.mount: Deactivated successfully. 
Sep 10 23:44:32.654667 containerd[1523]: time="2025-09-10T23:44:32.654618887Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:32.655292 containerd[1523]: time="2025-09-10T23:44:32.655243189Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:44:32.658479 containerd[1523]: time="2025-09-10T23:44:32.658438281Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:32.660863 containerd[1523]: time="2025-09-10T23:44:32.660821376Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:32.661886 containerd[1523]: time="2025-09-10T23:44:32.661849195Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.814573843s" Sep 10 23:44:32.661941 containerd[1523]: time="2025-09-10T23:44:32.661884806Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:44:32.664380 containerd[1523]: time="2025-09-10T23:44:32.664310513Z" level=info msg="CreateContainer within sandbox \"b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:44:32.672202 containerd[1523]: time="2025-09-10T23:44:32.672136396Z" level=info msg="Container 
daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:32.678984 containerd[1523]: time="2025-09-10T23:44:32.678916453Z" level=info msg="CreateContainer within sandbox \"b94f52e009e0b017252d8f88d801074fba50e4ee180243cc9b47865be17cc68d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c\"" Sep 10 23:44:32.679555 containerd[1523]: time="2025-09-10T23:44:32.679526832Z" level=info msg="StartContainer for \"daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c\"" Sep 10 23:44:32.680768 containerd[1523]: time="2025-09-10T23:44:32.680472027Z" level=info msg="connecting to shim daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c" address="unix:///run/containerd/s/6fbf2cd7b8746444e6c412ad37c4bbc66707392a1c15d4a9ae3a0706bbb5e57b" protocol=ttrpc version=3 Sep 10 23:44:32.705834 systemd[1]: Started cri-containerd-daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c.scope - libcontainer container daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c. 
Sep 10 23:44:32.739576 containerd[1523]: time="2025-09-10T23:44:32.739531693Z" level=info msg="StartContainer for \"daf89bacb41a41f34837448ec30bf5da74952a8dcf3c837e1bce536198f8a98c\" returns successfully" Sep 10 23:44:33.453717 kubelet[2640]: I0910 23:44:33.453513 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wfl9z" podStartSLOduration=3.453481422 podStartE2EDuration="3.453481422s" podCreationTimestamp="2025-09-10 23:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:44:31.445333736 +0000 UTC m=+8.130516832" watchObservedRunningTime="2025-09-10 23:44:33.453481422 +0000 UTC m=+10.138664518" Sep 10 23:44:33.454538 kubelet[2640]: I0910 23:44:33.453713 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-5c2vj" podStartSLOduration=1.637263816 podStartE2EDuration="3.453706764s" podCreationTimestamp="2025-09-10 23:44:30 +0000 UTC" firstStartedPulling="2025-09-10 23:44:30.846277902 +0000 UTC m=+7.531460998" lastFinishedPulling="2025-09-10 23:44:32.66272085 +0000 UTC m=+9.347903946" observedRunningTime="2025-09-10 23:44:33.453228113 +0000 UTC m=+10.138411209" watchObservedRunningTime="2025-09-10 23:44:33.453706764 +0000 UTC m=+10.138889820" Sep 10 23:44:38.138173 sudo[1730]: pam_unix(sudo:session): session closed for user root Sep 10 23:44:38.140462 sshd[1729]: Connection closed by 10.0.0.1 port 45776 Sep 10 23:44:38.140838 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 10 23:44:38.144841 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:45776.service: Deactivated successfully. Sep 10 23:44:38.146727 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:44:38.148664 systemd[1]: session-7.scope: Consumed 6.373s CPU time, 224.5M memory peak. Sep 10 23:44:38.150268 systemd-logind[1495]: Session 7 logged out. 
Waiting for processes to exit. Sep 10 23:44:38.153383 systemd-logind[1495]: Removed session 7. Sep 10 23:44:39.495692 update_engine[1496]: I20250910 23:44:39.495620 1496 update_attempter.cc:509] Updating boot flags... Sep 10 23:44:41.288639 systemd[1]: Created slice kubepods-besteffort-pod6398981d_66bf_4d91_b2b6_9b55bdc3a983.slice - libcontainer container kubepods-besteffort-pod6398981d_66bf_4d91_b2b6_9b55bdc3a983.slice. Sep 10 23:44:41.306518 kubelet[2640]: I0910 23:44:41.306467 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd7mm\" (UniqueName: \"kubernetes.io/projected/6398981d-66bf-4d91-b2b6-9b55bdc3a983-kube-api-access-vd7mm\") pod \"calico-typha-6b794bc74-dhtpd\" (UID: \"6398981d-66bf-4d91-b2b6-9b55bdc3a983\") " pod="calico-system/calico-typha-6b794bc74-dhtpd" Sep 10 23:44:41.307286 kubelet[2640]: I0910 23:44:41.306528 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6398981d-66bf-4d91-b2b6-9b55bdc3a983-tigera-ca-bundle\") pod \"calico-typha-6b794bc74-dhtpd\" (UID: \"6398981d-66bf-4d91-b2b6-9b55bdc3a983\") " pod="calico-system/calico-typha-6b794bc74-dhtpd" Sep 10 23:44:41.307286 kubelet[2640]: I0910 23:44:41.306552 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6398981d-66bf-4d91-b2b6-9b55bdc3a983-typha-certs\") pod \"calico-typha-6b794bc74-dhtpd\" (UID: \"6398981d-66bf-4d91-b2b6-9b55bdc3a983\") " pod="calico-system/calico-typha-6b794bc74-dhtpd" Sep 10 23:44:41.364474 systemd[1]: Created slice kubepods-besteffort-podd4889e48_17f7_4572_a971_b735d5c6b3f4.slice - libcontainer container kubepods-besteffort-podd4889e48_17f7_4572_a971_b735d5c6b3f4.slice. 
Sep 10 23:44:41.407272 kubelet[2640]: I0910 23:44:41.407228 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-cni-log-dir\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407272 kubelet[2640]: I0910 23:44:41.407272 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-xtables-lock\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407898 kubelet[2640]: I0910 23:44:41.407289 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2nrm\" (UniqueName: \"kubernetes.io/projected/d4889e48-17f7-4572-a971-b735d5c6b3f4-kube-api-access-r2nrm\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407898 kubelet[2640]: I0910 23:44:41.407325 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-policysync\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407898 kubelet[2640]: I0910 23:44:41.407340 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-var-run-calico\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407898 kubelet[2640]: I0910 23:44:41.407378 2640 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4889e48-17f7-4572-a971-b735d5c6b3f4-tigera-ca-bundle\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.407898 kubelet[2640]: I0910 23:44:41.407395 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-flexvol-driver-host\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.408013 kubelet[2640]: I0910 23:44:41.407418 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-var-lib-calico\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.408013 kubelet[2640]: I0910 23:44:41.407435 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-cni-net-dir\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.408013 kubelet[2640]: I0910 23:44:41.407458 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-cni-bin-dir\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.408013 kubelet[2640]: I0910 23:44:41.407475 2640 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4889e48-17f7-4572-a971-b735d5c6b3f4-lib-modules\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.408013 kubelet[2640]: I0910 23:44:41.407490 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d4889e48-17f7-4572-a971-b735d5c6b3f4-node-certs\") pod \"calico-node-7b87c\" (UID: \"d4889e48-17f7-4572-a971-b735d5c6b3f4\") " pod="calico-system/calico-node-7b87c" Sep 10 23:44:41.515243 kubelet[2640]: E0910 23:44:41.515216 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.515357 kubelet[2640]: W0910 23:44:41.515236 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.517760 kubelet[2640]: E0910 23:44:41.517694 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.520290 kubelet[2640]: E0910 23:44:41.520256 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.520290 kubelet[2640]: W0910 23:44:41.520274 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.520290 kubelet[2640]: E0910 23:44:41.520290 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.524551 kubelet[2640]: E0910 23:44:41.524526 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.524551 kubelet[2640]: W0910 23:44:41.524544 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.524743 kubelet[2640]: E0910 23:44:41.524560 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.565149 kubelet[2640]: E0910 23:44:41.565004 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnr4r" podUID="b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab" Sep 10 23:44:41.594637 containerd[1523]: time="2025-09-10T23:44:41.594575287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b794bc74-dhtpd,Uid:6398981d-66bf-4d91-b2b6-9b55bdc3a983,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:41.595628 kubelet[2640]: E0910 23:44:41.595587 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.595628 kubelet[2640]: W0910 23:44:41.595625 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.595730 kubelet[2640]: E0910 23:44:41.595646 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.609912 kubelet[2640]: I0910 23:44:41.609892 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab-varrun\") pod \"csi-node-driver-qnr4r\" (UID: \"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab\") " pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:41.610096 kubelet[2640]: E0910 23:44:41.610070 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.610096 kubelet[2640]: W0910 23:44:41.610080 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.610096 kubelet[2640]: E0910 23:44:41.610089 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.610167 kubelet[2640]: I0910 23:44:41.610103 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab-kubelet-dir\") pod \"csi-node-driver-qnr4r\" (UID: \"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab\") " pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:41.610269 kubelet[2640]: E0910 23:44:41.610255 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.610269 kubelet[2640]: W0910 23:44:41.610267 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.610317 kubelet[2640]: E0910 23:44:41.610281 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.610317 kubelet[2640]: I0910 23:44:41.610296 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab-registration-dir\") pod \"csi-node-driver-qnr4r\" (UID: \"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab\") " pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:41.610457 kubelet[2640]: E0910 23:44:41.610442 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.610457 kubelet[2640]: W0910 23:44:41.610455 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.610515 kubelet[2640]: E0910 23:44:41.610463 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.610515 kubelet[2640]: I0910 23:44:41.610478 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab-socket-dir\") pod \"csi-node-driver-qnr4r\" (UID: \"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab\") " pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:41.610726 kubelet[2640]: E0910 23:44:41.610711 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.610726 kubelet[2640]: W0910 23:44:41.610724 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.610783 kubelet[2640]: E0910 23:44:41.610741 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.610783 kubelet[2640]: I0910 23:44:41.610758 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsq8d\" (UniqueName: \"kubernetes.io/projected/b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab-kube-api-access-rsq8d\") pod \"csi-node-driver-qnr4r\" (UID: \"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab\") " pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:41.610980 kubelet[2640]: E0910 23:44:41.610967 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.610980 kubelet[2640]: W0910 23:44:41.610979 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.611036 kubelet[2640]: E0910 23:44:41.610994 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.611141 kubelet[2640]: E0910 23:44:41.611128 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.611141 kubelet[2640]: W0910 23:44:41.611138 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.611270 kubelet[2640]: E0910 23:44:41.611195 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.669980 containerd[1523]: time="2025-09-10T23:44:41.669175421Z" level=info msg="connecting to shim 596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f" address="unix:///run/containerd/s/e72864c429ec11c9b5a7acad5557ddba56a54b93e4134cedc34d0dbd45f3cb23" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:41.671746 containerd[1523]: time="2025-09-10T23:44:41.670265559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b87c,Uid:d4889e48-17f7-4572-a971-b735d5c6b3f4,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:41.712412 kubelet[2640]: E0910 23:44:41.712241 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.712412 kubelet[2640]: W0910 23:44:41.712266 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.712412 kubelet[2640]: E0910 23:44:41.712287 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.728521 kubelet[2640]: E0910 23:44:41.728411 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.728521 kubelet[2640]: W0910 23:44:41.728436 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.728521 kubelet[2640]: E0910 23:44:41.728465 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.749049 kubelet[2640]: E0910 23:44:41.749014 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.749049 kubelet[2640]: W0910 23:44:41.749028 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.749751 kubelet[2640]: E0910 23:44:41.749655 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.749889 kubelet[2640]: E0910 23:44:41.749877 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.750079 kubelet[2640]: W0910 23:44:41.749939 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.750079 kubelet[2640]: E0910 23:44:41.749999 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.750576 kubelet[2640]: E0910 23:44:41.750559 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.750681 kubelet[2640]: W0910 23:44:41.750668 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.750835 kubelet[2640]: E0910 23:44:41.750804 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.751298 kubelet[2640]: E0910 23:44:41.751172 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.751298 kubelet[2640]: W0910 23:44:41.751187 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.751298 kubelet[2640]: E0910 23:44:41.751206 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.752056 kubelet[2640]: E0910 23:44:41.752040 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.752173 kubelet[2640]: W0910 23:44:41.752159 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.752397 kubelet[2640]: E0910 23:44:41.752351 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.756212 kubelet[2640]: E0910 23:44:41.756188 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.756881 kubelet[2640]: W0910 23:44:41.756758 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.756881 kubelet[2640]: E0910 23:44:41.756830 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.758411 kubelet[2640]: E0910 23:44:41.758391 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.758606 kubelet[2640]: W0910 23:44:41.758529 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.758606 kubelet[2640]: E0910 23:44:41.758575 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.759484 kubelet[2640]: E0910 23:44:41.759170 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.759484 kubelet[2640]: W0910 23:44:41.759187 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.759484 kubelet[2640]: E0910 23:44:41.759200 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.759767 kubelet[2640]: E0910 23:44:41.759744 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.759846 kubelet[2640]: W0910 23:44:41.759832 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.759918 kubelet[2640]: E0910 23:44:41.759888 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:41.760852 containerd[1523]: time="2025-09-10T23:44:41.760794972Z" level=info msg="connecting to shim 074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f" address="unix:///run/containerd/s/93ddaef7615c11935eb695eb7e80b96ac30ac6a85ea2ba54bce0f024e6cc66a6" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:41.785805 systemd[1]: Started cri-containerd-596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f.scope - libcontainer container 596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f. Sep 10 23:44:41.791246 kubelet[2640]: E0910 23:44:41.791217 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:41.791246 kubelet[2640]: W0910 23:44:41.791237 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:41.791246 kubelet[2640]: E0910 23:44:41.791256 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:41.845809 systemd[1]: Started cri-containerd-074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f.scope - libcontainer container 074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f. Sep 10 23:44:41.884127 containerd[1523]: time="2025-09-10T23:44:41.884074250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b794bc74-dhtpd,Uid:6398981d-66bf-4d91-b2b6-9b55bdc3a983,Namespace:calico-system,Attempt:0,} returns sandbox id \"596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f\"" Sep 10 23:44:41.894878 containerd[1523]: time="2025-09-10T23:44:41.894838087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 23:44:41.912080 containerd[1523]: time="2025-09-10T23:44:41.912037054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b87c,Uid:d4889e48-17f7-4572-a971-b735d5c6b3f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\"" Sep 10 23:44:42.863755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252806976.mount: Deactivated successfully. 
Sep 10 23:44:43.410120 kubelet[2640]: E0910 23:44:43.410065 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnr4r" podUID="b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab" Sep 10 23:44:43.788673 containerd[1523]: time="2025-09-10T23:44:43.788524993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:43.789211 containerd[1523]: time="2025-09-10T23:44:43.789148602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 23:44:43.793744 containerd[1523]: time="2025-09-10T23:44:43.793684173Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:43.796554 containerd[1523]: time="2025-09-10T23:44:43.796490455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:43.815706 containerd[1523]: time="2025-09-10T23:44:43.815176775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.9202888s" Sep 10 23:44:43.815706 containerd[1523]: time="2025-09-10T23:44:43.815236464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 23:44:43.816876 containerd[1523]: time="2025-09-10T23:44:43.816827092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 23:44:43.828440 containerd[1523]: time="2025-09-10T23:44:43.828399192Z" level=info msg="CreateContainer within sandbox \"596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 23:44:43.836646 containerd[1523]: time="2025-09-10T23:44:43.835922431Z" level=info msg="Container 5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:43.842367 containerd[1523]: time="2025-09-10T23:44:43.842303386Z" level=info msg="CreateContainer within sandbox \"596560f835cbaa7fd2d590f0f3dfb5f743c77b40527205bbbe68e83137691d0f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445\"" Sep 10 23:44:43.844873 containerd[1523]: time="2025-09-10T23:44:43.844807225Z" level=info msg="StartContainer for \"5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445\"" Sep 10 23:44:43.846129 containerd[1523]: time="2025-09-10T23:44:43.846102691Z" level=info msg="connecting to shim 5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445" address="unix:///run/containerd/s/e72864c429ec11c9b5a7acad5557ddba56a54b93e4134cedc34d0dbd45f3cb23" protocol=ttrpc version=3 Sep 10 23:44:43.884800 systemd[1]: Started cri-containerd-5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445.scope - libcontainer container 5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445. 
Sep 10 23:44:43.952254 containerd[1523]: time="2025-09-10T23:44:43.952213711Z" level=info msg="StartContainer for \"5d84560c4e3d430c173256650171f98707924d55f2d8508403c421cc917e3445\" returns successfully" Sep 10 23:44:44.514892 kubelet[2640]: E0910 23:44:44.514861 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.514892 kubelet[2640]: W0910 23:44:44.514885 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515323 kubelet[2640]: E0910 23:44:44.514907 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.515323 kubelet[2640]: E0910 23:44:44.515123 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.515323 kubelet[2640]: W0910 23:44:44.515133 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515323 kubelet[2640]: E0910 23:44:44.515176 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.515415 kubelet[2640]: E0910 23:44:44.515358 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.515415 kubelet[2640]: W0910 23:44:44.515368 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515415 kubelet[2640]: E0910 23:44:44.515377 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.515528 kubelet[2640]: E0910 23:44:44.515511 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.515528 kubelet[2640]: W0910 23:44:44.515523 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515576 kubelet[2640]: E0910 23:44:44.515532 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.515734 kubelet[2640]: E0910 23:44:44.515699 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.515734 kubelet[2640]: W0910 23:44:44.515714 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515734 kubelet[2640]: E0910 23:44:44.515723 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.515887 kubelet[2640]: E0910 23:44:44.515873 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.515887 kubelet[2640]: W0910 23:44:44.515886 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.515934 kubelet[2640]: E0910 23:44:44.515894 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.516052 kubelet[2640]: E0910 23:44:44.516039 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.516077 kubelet[2640]: W0910 23:44:44.516054 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.516077 kubelet[2640]: E0910 23:44:44.516063 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.516221 kubelet[2640]: E0910 23:44:44.516208 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.516221 kubelet[2640]: W0910 23:44:44.516220 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.516274 kubelet[2640]: E0910 23:44:44.516228 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.516452 kubelet[2640]: E0910 23:44:44.516438 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.516452 kubelet[2640]: W0910 23:44:44.516450 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.516521 kubelet[2640]: E0910 23:44:44.516458 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.517294 kubelet[2640]: E0910 23:44:44.517273 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.517294 kubelet[2640]: W0910 23:44:44.517293 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.517378 kubelet[2640]: E0910 23:44:44.517307 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.521176 kubelet[2640]: E0910 23:44:44.521129 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.521176 kubelet[2640]: W0910 23:44:44.521148 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.521176 kubelet[2640]: E0910 23:44:44.521165 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.521355 kubelet[2640]: E0910 23:44:44.521337 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.521355 kubelet[2640]: W0910 23:44:44.521349 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.521409 kubelet[2640]: E0910 23:44:44.521357 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.521532 kubelet[2640]: E0910 23:44:44.521519 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.521555 kubelet[2640]: W0910 23:44:44.521531 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.521555 kubelet[2640]: E0910 23:44:44.521539 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.521769 kubelet[2640]: E0910 23:44:44.521754 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.521769 kubelet[2640]: W0910 23:44:44.521766 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.521829 kubelet[2640]: E0910 23:44:44.521775 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.521919 kubelet[2640]: E0910 23:44:44.521907 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.521919 kubelet[2640]: W0910 23:44:44.521917 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.521978 kubelet[2640]: E0910 23:44:44.521926 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.570092 kubelet[2640]: E0910 23:44:44.570046 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.570092 kubelet[2640]: W0910 23:44:44.570070 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.570092 kubelet[2640]: E0910 23:44:44.570097 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.570279 kubelet[2640]: E0910 23:44:44.570270 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.570305 kubelet[2640]: W0910 23:44:44.570278 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.570331 kubelet[2640]: E0910 23:44:44.570306 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.571706 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.572643 kubelet[2640]: W0910 23:44:44.571728 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.571749 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.571977 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.572643 kubelet[2640]: W0910 23:44:44.571987 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.572040 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.572166 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.572643 kubelet[2640]: W0910 23:44:44.572173 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.572224 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.572643 kubelet[2640]: E0910 23:44:44.572328 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573000 kubelet[2640]: W0910 23:44:44.572335 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572371 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572469 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573000 kubelet[2640]: W0910 23:44:44.572477 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572488 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572590 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573000 kubelet[2640]: W0910 23:44:44.572621 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572634 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.573000 kubelet[2640]: E0910 23:44:44.572996 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573172 kubelet[2640]: W0910 23:44:44.573008 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573172 kubelet[2640]: E0910 23:44:44.573029 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.573248 kubelet[2640]: E0910 23:44:44.573225 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573248 kubelet[2640]: W0910 23:44:44.573244 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573292 kubelet[2640]: E0910 23:44:44.573263 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.573634 kubelet[2640]: E0910 23:44:44.573381 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573634 kubelet[2640]: W0910 23:44:44.573391 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573634 kubelet[2640]: E0910 23:44:44.573399 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.573634 kubelet[2640]: E0910 23:44:44.573525 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.573634 kubelet[2640]: W0910 23:44:44.573532 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.573634 kubelet[2640]: E0910 23:44:44.573540 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.573819 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.575701 kubelet[2640]: W0910 23:44:44.573828 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.573837 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.573971 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.575701 kubelet[2640]: W0910 23:44:44.573979 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.573986 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.574371 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.575701 kubelet[2640]: W0910 23:44:44.574382 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.575701 kubelet[2640]: E0910 23:44:44.574392 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.576056 kubelet[2640]: E0910 23:44:44.575732 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.576056 kubelet[2640]: W0910 23:44:44.575745 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.576056 kubelet[2640]: E0910 23:44:44.575762 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.576116 kubelet[2640]: E0910 23:44:44.576069 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.576116 kubelet[2640]: W0910 23:44:44.576082 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.576116 kubelet[2640]: E0910 23:44:44.576103 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:44:44.576268 kubelet[2640]: E0910 23:44:44.576249 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:44:44.576268 kubelet[2640]: W0910 23:44:44.576260 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:44:44.576268 kubelet[2640]: E0910 23:44:44.576269 2640 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:44:44.944308 containerd[1523]: time="2025-09-10T23:44:44.944259312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:44.944916 containerd[1523]: time="2025-09-10T23:44:44.944872875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:44:44.945764 containerd[1523]: time="2025-09-10T23:44:44.945698746Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:44.947729 containerd[1523]: time="2025-09-10T23:44:44.947671371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:44.948242 containerd[1523]: time="2025-09-10T23:44:44.948207483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.131335025s" Sep 10 23:44:44.948291 containerd[1523]: time="2025-09-10T23:44:44.948245889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:44:44.951631 containerd[1523]: time="2025-09-10T23:44:44.951489325Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:44:44.960391 containerd[1523]: time="2025-09-10T23:44:44.960354317Z" level=info msg="Container 1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:44.970627 containerd[1523]: time="2025-09-10T23:44:44.970487959Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\"" Sep 10 23:44:44.971255 containerd[1523]: time="2025-09-10T23:44:44.971230619Z" level=info msg="StartContainer for \"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\"" Sep 10 23:44:44.973128 containerd[1523]: time="2025-09-10T23:44:44.973097910Z" level=info msg="connecting to shim 1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1" address="unix:///run/containerd/s/93ddaef7615c11935eb695eb7e80b96ac30ac6a85ea2ba54bce0f024e6cc66a6" protocol=ttrpc version=3 Sep 10 23:44:45.000828 systemd[1]: Started cri-containerd-1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1.scope - libcontainer container 
1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1. Sep 10 23:44:45.064551 containerd[1523]: time="2025-09-10T23:44:45.064508314Z" level=info msg="StartContainer for \"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\" returns successfully" Sep 10 23:44:45.082904 systemd[1]: cri-containerd-1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1.scope: Deactivated successfully. Sep 10 23:44:45.113498 containerd[1523]: time="2025-09-10T23:44:45.113262940Z" level=info msg="received exit event container_id:\"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\" id:\"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\" pid:3342 exited_at:{seconds:1757547885 nanos:99718672}" Sep 10 23:44:45.118260 containerd[1523]: time="2025-09-10T23:44:45.118202443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\" id:\"1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1\" pid:3342 exited_at:{seconds:1757547885 nanos:99718672}" Sep 10 23:44:45.155232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1be78c813029d1506f698c12fae6677b9c5423688af67a786639dcd89ea0c7e1-rootfs.mount: Deactivated successfully. 
Sep 10 23:44:45.406815 kubelet[2640]: E0910 23:44:45.406668 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnr4r" podUID="b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab" Sep 10 23:44:45.473693 kubelet[2640]: I0910 23:44:45.473149 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:44:45.476149 containerd[1523]: time="2025-09-10T23:44:45.475971905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 23:44:45.490820 kubelet[2640]: I0910 23:44:45.490563 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b794bc74-dhtpd" podStartSLOduration=2.568041168 podStartE2EDuration="4.490547062s" podCreationTimestamp="2025-09-10 23:44:41 +0000 UTC" firstStartedPulling="2025-09-10 23:44:41.894105167 +0000 UTC m=+18.579288263" lastFinishedPulling="2025-09-10 23:44:43.816610981 +0000 UTC m=+20.501794157" observedRunningTime="2025-09-10 23:44:44.479460012 +0000 UTC m=+21.164643188" watchObservedRunningTime="2025-09-10 23:44:45.490547062 +0000 UTC m=+22.175730158" Sep 10 23:44:47.406836 kubelet[2640]: E0910 23:44:47.406796 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnr4r" podUID="b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab" Sep 10 23:44:48.151899 containerd[1523]: time="2025-09-10T23:44:48.151843918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:48.152512 containerd[1523]: time="2025-09-10T23:44:48.152459862Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 23:44:48.153669 containerd[1523]: time="2025-09-10T23:44:48.153575858Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:48.155220 containerd[1523]: time="2025-09-10T23:44:48.155181425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:48.156635 containerd[1523]: time="2025-09-10T23:44:48.156589211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.680575861s" Sep 10 23:44:48.156635 containerd[1523]: time="2025-09-10T23:44:48.156632136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 23:44:48.158624 containerd[1523]: time="2025-09-10T23:44:48.158574937Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 23:44:48.167089 containerd[1523]: time="2025-09-10T23:44:48.166492720Z" level=info msg="Container dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:48.170816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195000447.mount: Deactivated successfully. 
Sep 10 23:44:48.177291 containerd[1523]: time="2025-09-10T23:44:48.177235996Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\"" Sep 10 23:44:48.177863 containerd[1523]: time="2025-09-10T23:44:48.177722046Z" level=info msg="StartContainer for \"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\"" Sep 10 23:44:48.179493 containerd[1523]: time="2025-09-10T23:44:48.179457427Z" level=info msg="connecting to shim dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de" address="unix:///run/containerd/s/93ddaef7615c11935eb695eb7e80b96ac30ac6a85ea2ba54bce0f024e6cc66a6" protocol=ttrpc version=3 Sep 10 23:44:48.200760 systemd[1]: Started cri-containerd-dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de.scope - libcontainer container dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de. Sep 10 23:44:48.260541 containerd[1523]: time="2025-09-10T23:44:48.260491404Z" level=info msg="StartContainer for \"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\" returns successfully" Sep 10 23:44:48.767507 systemd[1]: cri-containerd-dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de.scope: Deactivated successfully. Sep 10 23:44:48.767901 systemd[1]: cri-containerd-dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de.scope: Consumed 461ms CPU time, 173.1M memory peak, 3.2M read from disk, 165.8M written to disk. 
Sep 10 23:44:48.772923 kubelet[2640]: I0910 23:44:48.772193 2640 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:44:48.773258 containerd[1523]: time="2025-09-10T23:44:48.772889231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\" id:\"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\" pid:3402 exited_at:{seconds:1757547888 nanos:771870085}" Sep 10 23:44:48.777081 containerd[1523]: time="2025-09-10T23:44:48.777023620Z" level=info msg="received exit event container_id:\"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\" id:\"dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de\" pid:3402 exited_at:{seconds:1757547888 nanos:771870085}" Sep 10 23:44:48.818950 systemd[1]: Created slice kubepods-besteffort-pod0a91db49_7831_4a12_8c92_c0e983e8451d.slice - libcontainer container kubepods-besteffort-pod0a91db49_7831_4a12_8c92_c0e983e8451d.slice. Sep 10 23:44:48.829350 systemd[1]: Created slice kubepods-burstable-pod198e9461_1076_4ae3_96d7_cb34713b6cd8.slice - libcontainer container kubepods-burstable-pod198e9461_1076_4ae3_96d7_cb34713b6cd8.slice. Sep 10 23:44:48.836051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbee9aee06d5e54b29ff0a7d6e09fb3e39343c4126bc74e93a9d4ad21a0132de-rootfs.mount: Deactivated successfully. Sep 10 23:44:48.842504 systemd[1]: Created slice kubepods-burstable-podba86c2d5_1ce7_42cf_883e_a99585fe4180.slice - libcontainer container kubepods-burstable-podba86c2d5_1ce7_42cf_883e_a99585fe4180.slice. Sep 10 23:44:48.850514 systemd[1]: Created slice kubepods-besteffort-pod91562592_1b1f_4278_b6a6_71c79990837e.slice - libcontainer container kubepods-besteffort-pod91562592_1b1f_4278_b6a6_71c79990837e.slice. 
Sep 10 23:44:48.858143 systemd[1]: Created slice kubepods-besteffort-pod151d71d3_c5e5_45ea_8230_624b6527f1e8.slice - libcontainer container kubepods-besteffort-pod151d71d3_c5e5_45ea_8230_624b6527f1e8.slice. Sep 10 23:44:48.865825 systemd[1]: Created slice kubepods-besteffort-pod17b7ffaf_afae_4058_99ef_1152edfa4efa.slice - libcontainer container kubepods-besteffort-pod17b7ffaf_afae_4058_99ef_1152edfa4efa.slice. Sep 10 23:44:48.870762 systemd[1]: Created slice kubepods-besteffort-pod6343e5ee_80d9_43c3_abc2_8e7a580d6e68.slice - libcontainer container kubepods-besteffort-pod6343e5ee_80d9_43c3_abc2_8e7a580d6e68.slice. Sep 10 23:44:48.898395 kubelet[2640]: I0910 23:44:48.898268 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkdn\" (UniqueName: \"kubernetes.io/projected/0a91db49-7831-4a12-8c92-c0e983e8451d-kube-api-access-vfkdn\") pod \"whisker-5d9f84d5bb-n4b9k\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " pod="calico-system/whisker-5d9f84d5bb-n4b9k" Sep 10 23:44:48.898395 kubelet[2640]: I0910 23:44:48.898316 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-backend-key-pair\") pod \"whisker-5d9f84d5bb-n4b9k\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " pod="calico-system/whisker-5d9f84d5bb-n4b9k" Sep 10 23:44:48.898395 kubelet[2640]: I0910 23:44:48.898335 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-ca-bundle\") pod \"whisker-5d9f84d5bb-n4b9k\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " pod="calico-system/whisker-5d9f84d5bb-n4b9k" Sep 10 23:44:48.898395 kubelet[2640]: I0910 23:44:48.898355 2640 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jjk5\" (UniqueName: \"kubernetes.io/projected/6343e5ee-80d9-43c3-abc2-8e7a580d6e68-kube-api-access-5jjk5\") pod \"goldmane-54d579b49d-g4xj7\" (UID: \"6343e5ee-80d9-43c3-abc2-8e7a580d6e68\") " pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:48.898672 kubelet[2640]: I0910 23:44:48.898398 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba86c2d5-1ce7-42cf-883e-a99585fe4180-config-volume\") pod \"coredns-668d6bf9bc-4zbv8\" (UID: \"ba86c2d5-1ce7-42cf-883e-a99585fe4180\") " pod="kube-system/coredns-668d6bf9bc-4zbv8" Sep 10 23:44:48.898672 kubelet[2640]: I0910 23:44:48.898464 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqm7w\" (UniqueName: \"kubernetes.io/projected/151d71d3-c5e5-45ea-8230-624b6527f1e8-kube-api-access-vqm7w\") pod \"calico-kube-controllers-766c76848b-jnpzb\" (UID: \"151d71d3-c5e5-45ea-8230-624b6527f1e8\") " pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" Sep 10 23:44:48.898672 kubelet[2640]: I0910 23:44:48.898512 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6343e5ee-80d9-43c3-abc2-8e7a580d6e68-config\") pod \"goldmane-54d579b49d-g4xj7\" (UID: \"6343e5ee-80d9-43c3-abc2-8e7a580d6e68\") " pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:48.898672 kubelet[2640]: I0910 23:44:48.898533 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlkxj\" (UniqueName: \"kubernetes.io/projected/91562592-1b1f-4278-b6a6-71c79990837e-kube-api-access-rlkxj\") pod \"calico-apiserver-79fbf74b84-vh2hg\" (UID: \"91562592-1b1f-4278-b6a6-71c79990837e\") " pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" Sep 10 
23:44:48.898672 kubelet[2640]: I0910 23:44:48.898551 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6343e5ee-80d9-43c3-abc2-8e7a580d6e68-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-g4xj7\" (UID: \"6343e5ee-80d9-43c3-abc2-8e7a580d6e68\") " pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:48.898797 kubelet[2640]: I0910 23:44:48.898576 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/198e9461-1076-4ae3-96d7-cb34713b6cd8-config-volume\") pod \"coredns-668d6bf9bc-fbfhs\" (UID: \"198e9461-1076-4ae3-96d7-cb34713b6cd8\") " pod="kube-system/coredns-668d6bf9bc-fbfhs" Sep 10 23:44:48.898797 kubelet[2640]: I0910 23:44:48.898608 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbch\" (UniqueName: \"kubernetes.io/projected/198e9461-1076-4ae3-96d7-cb34713b6cd8-kube-api-access-8tbch\") pod \"coredns-668d6bf9bc-fbfhs\" (UID: \"198e9461-1076-4ae3-96d7-cb34713b6cd8\") " pod="kube-system/coredns-668d6bf9bc-fbfhs" Sep 10 23:44:48.898797 kubelet[2640]: I0910 23:44:48.898627 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17b7ffaf-afae-4058-99ef-1152edfa4efa-calico-apiserver-certs\") pod \"calico-apiserver-79fbf74b84-8vgwp\" (UID: \"17b7ffaf-afae-4058-99ef-1152edfa4efa\") " pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" Sep 10 23:44:48.898797 kubelet[2640]: I0910 23:44:48.898650 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4czc\" (UniqueName: \"kubernetes.io/projected/17b7ffaf-afae-4058-99ef-1152edfa4efa-kube-api-access-g4czc\") pod \"calico-apiserver-79fbf74b84-8vgwp\" (UID: 
\"17b7ffaf-afae-4058-99ef-1152edfa4efa\") " pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" Sep 10 23:44:48.898797 kubelet[2640]: I0910 23:44:48.898666 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6343e5ee-80d9-43c3-abc2-8e7a580d6e68-goldmane-key-pair\") pod \"goldmane-54d579b49d-g4xj7\" (UID: \"6343e5ee-80d9-43c3-abc2-8e7a580d6e68\") " pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:48.898900 kubelet[2640]: I0910 23:44:48.898688 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/151d71d3-c5e5-45ea-8230-624b6527f1e8-tigera-ca-bundle\") pod \"calico-kube-controllers-766c76848b-jnpzb\" (UID: \"151d71d3-c5e5-45ea-8230-624b6527f1e8\") " pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" Sep 10 23:44:48.898900 kubelet[2640]: I0910 23:44:48.898704 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91562592-1b1f-4278-b6a6-71c79990837e-calico-apiserver-certs\") pod \"calico-apiserver-79fbf74b84-vh2hg\" (UID: \"91562592-1b1f-4278-b6a6-71c79990837e\") " pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" Sep 10 23:44:48.898900 kubelet[2640]: I0910 23:44:48.898727 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzd9\" (UniqueName: \"kubernetes.io/projected/ba86c2d5-1ce7-42cf-883e-a99585fe4180-kube-api-access-mzzd9\") pod \"coredns-668d6bf9bc-4zbv8\" (UID: \"ba86c2d5-1ce7-42cf-883e-a99585fe4180\") " pod="kube-system/coredns-668d6bf9bc-4zbv8" Sep 10 23:44:49.125404 containerd[1523]: time="2025-09-10T23:44:49.125366635Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5d9f84d5bb-n4b9k,Uid:0a91db49-7831-4a12-8c92-c0e983e8451d,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:49.135419 containerd[1523]: time="2025-09-10T23:44:49.135376410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fbfhs,Uid:198e9461-1076-4ae3-96d7-cb34713b6cd8,Namespace:kube-system,Attempt:0,}" Sep 10 23:44:49.150957 containerd[1523]: time="2025-09-10T23:44:49.150615574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zbv8,Uid:ba86c2d5-1ce7-42cf-883e-a99585fe4180,Namespace:kube-system,Attempt:0,}" Sep 10 23:44:49.157156 containerd[1523]: time="2025-09-10T23:44:49.157111766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-vh2hg,Uid:91562592-1b1f-4278-b6a6-71c79990837e,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:44:49.186065 containerd[1523]: time="2025-09-10T23:44:49.185956335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-8vgwp,Uid:17b7ffaf-afae-4058-99ef-1152edfa4efa,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:44:49.186511 containerd[1523]: time="2025-09-10T23:44:49.186482227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c76848b-jnpzb,Uid:151d71d3-c5e5-45ea-8230-624b6527f1e8,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:49.186634 containerd[1523]: time="2025-09-10T23:44:49.186612239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g4xj7,Uid:6343e5ee-80d9-43c3-abc2-8e7a580d6e68,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:49.304614 containerd[1523]: time="2025-09-10T23:44:49.304546324Z" level=error msg="Failed to destroy network for sandbox \"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 10 23:44:49.307385 systemd[1]: run-netns-cni\x2dde038467\x2d28ef\x2d466f\x2dfcaa\x2d18f0db45d182.mount: Deactivated successfully. Sep 10 23:44:49.315425 containerd[1523]: time="2025-09-10T23:44:49.315353177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9f84d5bb-n4b9k,Uid:0a91db49-7831-4a12-8c92-c0e983e8451d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.316973 kubelet[2640]: E0910 23:44:49.316903 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.319682 kubelet[2640]: E0910 23:44:49.319617 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9f84d5bb-n4b9k" Sep 10 23:44:49.319682 kubelet[2640]: E0910 23:44:49.319677 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d9f84d5bb-n4b9k" Sep 10 23:44:49.319799 kubelet[2640]: E0910 23:44:49.319727 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d9f84d5bb-n4b9k_calico-system(0a91db49-7831-4a12-8c92-c0e983e8451d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d9f84d5bb-n4b9k_calico-system(0a91db49-7831-4a12-8c92-c0e983e8451d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b68f045ad49371e0c16414443f96b8a86c4828d3e96f57a8d5b0b9fb2b601f99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d9f84d5bb-n4b9k" podUID="0a91db49-7831-4a12-8c92-c0e983e8451d" Sep 10 23:44:49.326752 containerd[1523]: time="2025-09-10T23:44:49.326647517Z" level=error msg="Failed to destroy network for sandbox \"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.329552 containerd[1523]: time="2025-09-10T23:44:49.329497074Z" level=error msg="Failed to destroy network for sandbox \"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.330409 containerd[1523]: time="2025-09-10T23:44:49.330382520Z" level=error msg="Failed to destroy network for sandbox \"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.330933 containerd[1523]: time="2025-09-10T23:44:49.330882969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-8vgwp,Uid:17b7ffaf-afae-4058-99ef-1152edfa4efa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.331552 kubelet[2640]: E0910 23:44:49.331502 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.331859 kubelet[2640]: E0910 23:44:49.331559 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" Sep 10 23:44:49.331859 kubelet[2640]: E0910 23:44:49.331578 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" Sep 10 23:44:49.331859 kubelet[2640]: E0910 23:44:49.331727 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79fbf74b84-8vgwp_calico-apiserver(17b7ffaf-afae-4058-99ef-1152edfa4efa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79fbf74b84-8vgwp_calico-apiserver(17b7ffaf-afae-4058-99ef-1152edfa4efa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ebf383d65c606e38dd1886b1e4ec6533cafaac0fc15fae1751ca8232499d6af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" podUID="17b7ffaf-afae-4058-99ef-1152edfa4efa" Sep 10 23:44:49.333166 containerd[1523]: time="2025-09-10T23:44:49.333111666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g4xj7,Uid:6343e5ee-80d9-43c3-abc2-8e7a580d6e68,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.333938 kubelet[2640]: E0910 23:44:49.333810 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.334132 kubelet[2640]: E0910 23:44:49.334075 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:49.334219 kubelet[2640]: E0910 23:44:49.334196 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-g4xj7" Sep 10 23:44:49.334454 kubelet[2640]: E0910 23:44:49.334350 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-g4xj7_calico-system(6343e5ee-80d9-43c3-abc2-8e7a580d6e68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-g4xj7_calico-system(6343e5ee-80d9-43c3-abc2-8e7a580d6e68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2687e043cb5f7afb4de60fc8a434c7a56d0dde19224a970416c9109362417dac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-g4xj7" podUID="6343e5ee-80d9-43c3-abc2-8e7a580d6e68" Sep 10 23:44:49.334679 kubelet[2640]: E0910 23:44:49.334646 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.334719 containerd[1523]: time="2025-09-10T23:44:49.334497721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-vh2hg,Uid:91562592-1b1f-4278-b6a6-71c79990837e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.334760 kubelet[2640]: E0910 23:44:49.334680 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" Sep 10 23:44:49.334760 kubelet[2640]: E0910 23:44:49.334697 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" Sep 10 23:44:49.334760 kubelet[2640]: E0910 23:44:49.334725 2640 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79fbf74b84-vh2hg_calico-apiserver(91562592-1b1f-4278-b6a6-71c79990837e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79fbf74b84-vh2hg_calico-apiserver(91562592-1b1f-4278-b6a6-71c79990837e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f53ee86b953804595697547f3be5dfe0ed4f18b628f35523272ce4e85049d492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" podUID="91562592-1b1f-4278-b6a6-71c79990837e" Sep 10 23:44:49.337750 containerd[1523]: time="2025-09-10T23:44:49.337692072Z" level=error msg="Failed to destroy network for sandbox \"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.338832 containerd[1523]: time="2025-09-10T23:44:49.338718572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c76848b-jnpzb,Uid:151d71d3-c5e5-45ea-8230-624b6527f1e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.339687 kubelet[2640]: E0910 23:44:49.339657 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.339928 kubelet[2640]: E0910 23:44:49.339830 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" Sep 10 23:44:49.339928 kubelet[2640]: E0910 23:44:49.339870 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" Sep 10 23:44:49.340022 kubelet[2640]: E0910 23:44:49.339908 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766c76848b-jnpzb_calico-system(151d71d3-c5e5-45ea-8230-624b6527f1e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766c76848b-jnpzb_calico-system(151d71d3-c5e5-45ea-8230-624b6527f1e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2411b9465bc740d1c58e2d7e44a8f94220619244d8e312d66bdeb508652a9ba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" podUID="151d71d3-c5e5-45ea-8230-624b6527f1e8" Sep 10 23:44:49.341407 containerd[1523]: time="2025-09-10T23:44:49.341367510Z" level=error msg="Failed to destroy network for sandbox \"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.341798 containerd[1523]: time="2025-09-10T23:44:49.341743307Z" level=error msg="Failed to destroy network for sandbox \"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.342709 containerd[1523]: time="2025-09-10T23:44:49.342367648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zbv8,Uid:ba86c2d5-1ce7-42cf-883e-a99585fe4180,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.342831 kubelet[2640]: E0910 23:44:49.342560 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.342831 kubelet[2640]: E0910 23:44:49.342612 2640 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zbv8" Sep 10 23:44:49.342831 kubelet[2640]: E0910 23:44:49.342629 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zbv8" Sep 10 23:44:49.342916 kubelet[2640]: E0910 23:44:49.342661 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4zbv8_kube-system(ba86c2d5-1ce7-42cf-883e-a99585fe4180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4zbv8_kube-system(ba86c2d5-1ce7-42cf-883e-a99585fe4180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"299e588e3679848c84d3e1b89bf6ea0fdcc75c6606bc9bbcaa5c777cf7711885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4zbv8" podUID="ba86c2d5-1ce7-42cf-883e-a99585fe4180" Sep 10 23:44:49.343897 containerd[1523]: time="2025-09-10T23:44:49.343810708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fbfhs,Uid:198e9461-1076-4ae3-96d7-cb34713b6cd8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.344170 kubelet[2640]: E0910 23:44:49.343974 2640 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.344170 kubelet[2640]: E0910 23:44:49.344013 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fbfhs" Sep 10 23:44:49.344170 kubelet[2640]: E0910 23:44:49.344029 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fbfhs" Sep 10 23:44:49.344251 kubelet[2640]: E0910 23:44:49.344056 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fbfhs_kube-system(198e9461-1076-4ae3-96d7-cb34713b6cd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-fbfhs_kube-system(198e9461-1076-4ae3-96d7-cb34713b6cd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"266a0af1a1ec2934b7721a1f78c75e6ea73d65259ebbd52c802a2ef5f3a07b7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fbfhs" podUID="198e9461-1076-4ae3-96d7-cb34713b6cd8" Sep 10 23:44:49.411895 systemd[1]: Created slice kubepods-besteffort-podb009b8ad_18e2_4e3b_8b12_e22cc0b0ccab.slice - libcontainer container kubepods-besteffort-podb009b8ad_18e2_4e3b_8b12_e22cc0b0ccab.slice. Sep 10 23:44:49.414729 containerd[1523]: time="2025-09-10T23:44:49.414689931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnr4r,Uid:b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:49.463849 containerd[1523]: time="2025-09-10T23:44:49.463774551Z" level=error msg="Failed to destroy network for sandbox \"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.465035 containerd[1523]: time="2025-09-10T23:44:49.464987829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnr4r,Uid:b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.465632 kubelet[2640]: E0910 23:44:49.465234 2640 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:44:49.465632 kubelet[2640]: E0910 23:44:49.465302 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:49.465632 kubelet[2640]: E0910 23:44:49.465323 2640 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnr4r" Sep 10 23:44:49.465780 kubelet[2640]: E0910 23:44:49.465373 2640 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qnr4r_calico-system(b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qnr4r_calico-system(b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d298b212be617aa6ce756c8fb52b0c51fd56ecc6994ecc2f94e93439877e76c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qnr4r" podUID="b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab" Sep 10 23:44:49.487743 containerd[1523]: time="2025-09-10T23:44:49.487697601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:44:50.169261 systemd[1]: run-netns-cni\x2dd9a6c6f4\x2d85cd\x2d6a98\x2dc9be\x2d9a4fcb6dfaf7.mount: Deactivated successfully. Sep 10 23:44:50.169354 systemd[1]: run-netns-cni\x2d169d64b7\x2d3a8a\x2d768b\x2def79\x2db5581f551bd7.mount: Deactivated successfully. Sep 10 23:44:50.169400 systemd[1]: run-netns-cni\x2d7a93d070\x2dab1a\x2dd12b\x2d558c\x2d662bd0253a8d.mount: Deactivated successfully. Sep 10 23:44:50.169442 systemd[1]: run-netns-cni\x2d0e6495f9\x2d4b00\x2dbad7\x2d2694\x2dee6742c8f374.mount: Deactivated successfully. Sep 10 23:44:50.169491 systemd[1]: run-netns-cni\x2d5773c1bf\x2d5b52\x2d76b3\x2d747b\x2dce597613c011.mount: Deactivated successfully. Sep 10 23:44:50.169531 systemd[1]: run-netns-cni\x2d607a66a4\x2d214c\x2d9bb5\x2dbf1e\x2dca6f412615c0.mount: Deactivated successfully. Sep 10 23:44:53.042699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282003441.mount: Deactivated successfully. 
Sep 10 23:44:53.130188 containerd[1523]: time="2025-09-10T23:44:53.130055237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:53.131151 containerd[1523]: time="2025-09-10T23:44:53.130711166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:44:53.131614 containerd[1523]: time="2025-09-10T23:44:53.131521867Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:53.133236 containerd[1523]: time="2025-09-10T23:44:53.133185432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:53.133825 containerd[1523]: time="2025-09-10T23:44:53.133784557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.646029912s" Sep 10 23:44:53.133825 containerd[1523]: time="2025-09-10T23:44:53.133819040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:44:53.157739 containerd[1523]: time="2025-09-10T23:44:53.157671954Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:44:53.201926 containerd[1523]: time="2025-09-10T23:44:53.201839677Z" level=info msg="Container 
f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:53.223645 containerd[1523]: time="2025-09-10T23:44:53.223572872Z" level=info msg="CreateContainer within sandbox \"074357dd5ab2666059e1d9b905d2ddad4758eff74be7696d0dccb876d033659f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\"" Sep 10 23:44:53.224443 containerd[1523]: time="2025-09-10T23:44:53.224390134Z" level=info msg="StartContainer for \"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\"" Sep 10 23:44:53.229800 containerd[1523]: time="2025-09-10T23:44:53.229742496Z" level=info msg="connecting to shim f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41" address="unix:///run/containerd/s/93ddaef7615c11935eb695eb7e80b96ac30ac6a85ea2ba54bce0f024e6cc66a6" protocol=ttrpc version=3 Sep 10 23:44:53.258833 systemd[1]: Started cri-containerd-f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41.scope - libcontainer container f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41. Sep 10 23:44:53.346174 containerd[1523]: time="2025-09-10T23:44:53.346130892Z" level=info msg="StartContainer for \"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" returns successfully" Sep 10 23:44:53.449811 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:44:53.449911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 23:44:53.533238 kubelet[2640]: I0910 23:44:53.532706 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7b87c" podStartSLOduration=1.313494077 podStartE2EDuration="12.532690967s" podCreationTimestamp="2025-09-10 23:44:41 +0000 UTC" firstStartedPulling="2025-09-10 23:44:41.915471334 +0000 UTC m=+18.600654430" lastFinishedPulling="2025-09-10 23:44:53.134668224 +0000 UTC m=+29.819851320" observedRunningTime="2025-09-10 23:44:53.53219365 +0000 UTC m=+30.217376746" watchObservedRunningTime="2025-09-10 23:44:53.532690967 +0000 UTC m=+30.217874063" Sep 10 23:44:53.730524 kubelet[2640]: I0910 23:44:53.730312 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfkdn\" (UniqueName: \"kubernetes.io/projected/0a91db49-7831-4a12-8c92-c0e983e8451d-kube-api-access-vfkdn\") pod \"0a91db49-7831-4a12-8c92-c0e983e8451d\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " Sep 10 23:44:53.731358 kubelet[2640]: I0910 23:44:53.730663 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-backend-key-pair\") pod \"0a91db49-7831-4a12-8c92-c0e983e8451d\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " Sep 10 23:44:53.731358 kubelet[2640]: I0910 23:44:53.731209 2640 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-ca-bundle\") pod \"0a91db49-7831-4a12-8c92-c0e983e8451d\" (UID: \"0a91db49-7831-4a12-8c92-c0e983e8451d\") " Sep 10 23:44:53.736033 kubelet[2640]: I0910 23:44:53.735586 2640 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"0a91db49-7831-4a12-8c92-c0e983e8451d" (UID: "0a91db49-7831-4a12-8c92-c0e983e8451d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:44:53.741176 kubelet[2640]: I0910 23:44:53.741134 2640 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a91db49-7831-4a12-8c92-c0e983e8451d-kube-api-access-vfkdn" (OuterVolumeSpecName: "kube-api-access-vfkdn") pod "0a91db49-7831-4a12-8c92-c0e983e8451d" (UID: "0a91db49-7831-4a12-8c92-c0e983e8451d"). InnerVolumeSpecName "kube-api-access-vfkdn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:44:53.744101 kubelet[2640]: I0910 23:44:53.744055 2640 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0a91db49-7831-4a12-8c92-c0e983e8451d" (UID: "0a91db49-7831-4a12-8c92-c0e983e8451d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:44:53.832018 kubelet[2640]: I0910 23:44:53.831951 2640 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 23:44:53.832018 kubelet[2640]: I0910 23:44:53.831990 2640 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a91db49-7831-4a12-8c92-c0e983e8451d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 23:44:53.832018 kubelet[2640]: I0910 23:44:53.832000 2640 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfkdn\" (UniqueName: \"kubernetes.io/projected/0a91db49-7831-4a12-8c92-c0e983e8451d-kube-api-access-vfkdn\") on node \"localhost\" DevicePath \"\"" Sep 10 23:44:54.042831 systemd[1]: var-lib-kubelet-pods-0a91db49\x2d7831\x2d4a12\x2d8c92\x2dc0e983e8451d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfkdn.mount: Deactivated successfully. Sep 10 23:44:54.042959 systemd[1]: var-lib-kubelet-pods-0a91db49\x2d7831\x2d4a12\x2d8c92\x2dc0e983e8451d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:44:54.510480 kubelet[2640]: I0910 23:44:54.510427 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:44:54.514404 systemd[1]: Removed slice kubepods-besteffort-pod0a91db49_7831_4a12_8c92_c0e983e8451d.slice - libcontainer container kubepods-besteffort-pod0a91db49_7831_4a12_8c92_c0e983e8451d.slice. Sep 10 23:44:54.584474 systemd[1]: Created slice kubepods-besteffort-podab7bd910_feed_4e8b_a5d9_280b740a7259.slice - libcontainer container kubepods-besteffort-podab7bd910_feed_4e8b_a5d9_280b740a7259.slice. 
Sep 10 23:44:54.637464 kubelet[2640]: I0910 23:44:54.637398 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd5k9\" (UniqueName: \"kubernetes.io/projected/ab7bd910-feed-4e8b-a5d9-280b740a7259-kube-api-access-wd5k9\") pod \"whisker-698bd8db7c-vjr7j\" (UID: \"ab7bd910-feed-4e8b-a5d9-280b740a7259\") " pod="calico-system/whisker-698bd8db7c-vjr7j" Sep 10 23:44:54.637464 kubelet[2640]: I0910 23:44:54.637455 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab7bd910-feed-4e8b-a5d9-280b740a7259-whisker-backend-key-pair\") pod \"whisker-698bd8db7c-vjr7j\" (UID: \"ab7bd910-feed-4e8b-a5d9-280b740a7259\") " pod="calico-system/whisker-698bd8db7c-vjr7j" Sep 10 23:44:54.637924 kubelet[2640]: I0910 23:44:54.637492 2640 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7bd910-feed-4e8b-a5d9-280b740a7259-whisker-ca-bundle\") pod \"whisker-698bd8db7c-vjr7j\" (UID: \"ab7bd910-feed-4e8b-a5d9-280b740a7259\") " pod="calico-system/whisker-698bd8db7c-vjr7j" Sep 10 23:44:54.652559 containerd[1523]: time="2025-09-10T23:44:54.652503147Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" id:\"e858640665d1b2ce2224dbd851a48ff2e6d319da47e7b2b05f28f598fdc09047\" pid:3792 exit_status:1 exited_at:{seconds:1757547894 nanos:652203766}" Sep 10 23:44:54.728173 containerd[1523]: time="2025-09-10T23:44:54.728133482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" id:\"d80338dba57bccc768d6761b4e4300868cac171e3499627d6a9dfc7bcbebede4\" pid:3816 exit_status:1 exited_at:{seconds:1757547894 nanos:727842141}" Sep 10 23:44:54.892982 containerd[1523]: 
time="2025-09-10T23:44:54.892919944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698bd8db7c-vjr7j,Uid:ab7bd910-feed-4e8b-a5d9-280b740a7259,Namespace:calico-system,Attempt:0,}" Sep 10 23:44:55.190303 systemd-networkd[1422]: cali1643c8fc4cd: Link UP Sep 10 23:44:55.191404 systemd-networkd[1422]: cali1643c8fc4cd: Gained carrier Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:54.974 [INFO][3930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.041 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--698bd8db7c--vjr7j-eth0 whisker-698bd8db7c- calico-system ab7bd910-feed-4e8b-a5d9-280b740a7259 852 0 2025-09-10 23:44:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:698bd8db7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-698bd8db7c-vjr7j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1643c8fc4cd [] [] }} ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.042 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.120 [INFO][3947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" 
HandleID="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Workload="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.121 [INFO][3947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" HandleID="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Workload="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-698bd8db7c-vjr7j", "timestamp":"2025-09-10 23:44:55.120876775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.121 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.121 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.121 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.135 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.144 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.151 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.157 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.162 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.162 [INFO][3947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.166 [INFO][3947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146 Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.170 [INFO][3947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.180 [INFO][3947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.180 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" host="localhost" Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.180 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:44:55.208629 containerd[1523]: 2025-09-10 23:44:55.180 [INFO][3947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" HandleID="k8s-pod-network.e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Workload="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.182 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--698bd8db7c--vjr7j-eth0", GenerateName:"whisker-698bd8db7c-", Namespace:"calico-system", SelfLink:"", UID:"ab7bd910-feed-4e8b-a5d9-280b740a7259", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698bd8db7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-698bd8db7c-vjr7j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1643c8fc4cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.183 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.183 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1643c8fc4cd ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.192 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.193 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" 
WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--698bd8db7c--vjr7j-eth0", GenerateName:"whisker-698bd8db7c-", Namespace:"calico-system", SelfLink:"", UID:"ab7bd910-feed-4e8b-a5d9-280b740a7259", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"698bd8db7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146", Pod:"whisker-698bd8db7c-vjr7j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1643c8fc4cd", MAC:"66:00:cd:96:75:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:44:55.209240 containerd[1523]: 2025-09-10 23:44:55.204 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" Namespace="calico-system" Pod="whisker-698bd8db7c-vjr7j" WorkloadEndpoint="localhost-k8s-whisker--698bd8db7c--vjr7j-eth0" Sep 10 23:44:55.317304 containerd[1523]: time="2025-09-10T23:44:55.317252320Z" level=info msg="connecting to shim 
e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146" address="unix:///run/containerd/s/62fbd561872fbacff5b9e292b74267acb91d66c9d6a16308d2d4d067819ed556" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:55.358776 systemd[1]: Started cri-containerd-e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146.scope - libcontainer container e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146. Sep 10 23:44:55.371750 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:44:55.395758 containerd[1523]: time="2025-09-10T23:44:55.395722148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698bd8db7c-vjr7j,Uid:ab7bd910-feed-4e8b-a5d9-280b740a7259,Namespace:calico-system,Attempt:0,} returns sandbox id \"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146\"" Sep 10 23:44:55.397767 containerd[1523]: time="2025-09-10T23:44:55.397473144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:44:55.408509 kubelet[2640]: I0910 23:44:55.408470 2640 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a91db49-7831-4a12-8c92-c0e983e8451d" path="/var/lib/kubelet/pods/0a91db49-7831-4a12-8c92-c0e983e8451d/volumes" Sep 10 23:44:55.586366 containerd[1523]: time="2025-09-10T23:44:55.586256586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" id:\"5c6f7cb3b8732827b0e5c905c4d831075137e240994d0e461038ac2bbc74fc8a\" pid:4020 exit_status:1 exited_at:{seconds:1757547895 nanos:585974928}" Sep 10 23:44:56.354749 containerd[1523]: time="2025-09-10T23:44:56.354701736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:56.355814 containerd[1523]: time="2025-09-10T23:44:56.355224688Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:44:56.356222 containerd[1523]: time="2025-09-10T23:44:56.356198308Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:56.358341 containerd[1523]: time="2025-09-10T23:44:56.358121067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:56.358795 containerd[1523]: time="2025-09-10T23:44:56.358760307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 961.251521ms" Sep 10 23:44:56.358886 containerd[1523]: time="2025-09-10T23:44:56.358870954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:44:56.360698 containerd[1523]: time="2025-09-10T23:44:56.360664585Z" level=info msg="CreateContainer within sandbox \"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:44:56.367280 containerd[1523]: time="2025-09-10T23:44:56.366836648Z" level=info msg="Container 0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:56.373704 systemd-networkd[1422]: cali1643c8fc4cd: Gained IPv6LL Sep 10 23:44:56.374260 containerd[1523]: time="2025-09-10T23:44:56.374218785Z" level=info msg="CreateContainer within sandbox 
\"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e\"" Sep 10 23:44:56.374794 containerd[1523]: time="2025-09-10T23:44:56.374767699Z" level=info msg="StartContainer for \"0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e\"" Sep 10 23:44:56.376201 containerd[1523]: time="2025-09-10T23:44:56.376134544Z" level=info msg="connecting to shim 0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e" address="unix:///run/containerd/s/62fbd561872fbacff5b9e292b74267acb91d66c9d6a16308d2d4d067819ed556" protocol=ttrpc version=3 Sep 10 23:44:56.396860 systemd[1]: Started cri-containerd-0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e.scope - libcontainer container 0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e. Sep 10 23:44:56.429842 containerd[1523]: time="2025-09-10T23:44:56.429804591Z" level=info msg="StartContainer for \"0c88179ce335521da1c502258741da5904a163d0016b2b2b6d5aa3a46853b82e\" returns successfully" Sep 10 23:44:56.432695 containerd[1523]: time="2025-09-10T23:44:56.431050108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:44:57.697809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413768254.mount: Deactivated successfully. 
Sep 10 23:44:57.711940 containerd[1523]: time="2025-09-10T23:44:57.711869550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:57.713047 containerd[1523]: time="2025-09-10T23:44:57.712911731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:44:57.713861 containerd[1523]: time="2025-09-10T23:44:57.713826424Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:57.716238 containerd[1523]: time="2025-09-10T23:44:57.716206042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:44:57.716746 containerd[1523]: time="2025-09-10T23:44:57.716713392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.285626241s" Sep 10 23:44:57.716799 containerd[1523]: time="2025-09-10T23:44:57.716746394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:44:57.721061 containerd[1523]: time="2025-09-10T23:44:57.720537454Z" level=info msg="CreateContainer within sandbox \"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:44:57.733635 
containerd[1523]: time="2025-09-10T23:44:57.733396961Z" level=info msg="Container c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:44:57.744258 containerd[1523]: time="2025-09-10T23:44:57.744195549Z" level=info msg="CreateContainer within sandbox \"e26929d554dab15b3e031b76b2829350dc2e85e672df855f418f49bd6e598146\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb\"" Sep 10 23:44:57.744917 containerd[1523]: time="2025-09-10T23:44:57.744809705Z" level=info msg="StartContainer for \"c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb\"" Sep 10 23:44:57.746326 containerd[1523]: time="2025-09-10T23:44:57.746297751Z" level=info msg="connecting to shim c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb" address="unix:///run/containerd/s/62fbd561872fbacff5b9e292b74267acb91d66c9d6a16308d2d4d067819ed556" protocol=ttrpc version=3 Sep 10 23:44:57.773803 systemd[1]: Started cri-containerd-c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb.scope - libcontainer container c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb. 
Sep 10 23:44:57.815482 containerd[1523]: time="2025-09-10T23:44:57.815439969Z" level=info msg="StartContainer for \"c7572165d15071884397cbc9edda927e994690332a14bea0396f056fbb3c5cbb\" returns successfully" Sep 10 23:44:58.535323 kubelet[2640]: I0910 23:44:58.535259 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-698bd8db7c-vjr7j" podStartSLOduration=2.214407273 podStartE2EDuration="4.535215058s" podCreationTimestamp="2025-09-10 23:44:54 +0000 UTC" firstStartedPulling="2025-09-10 23:44:55.396847822 +0000 UTC m=+32.082030918" lastFinishedPulling="2025-09-10 23:44:57.717655607 +0000 UTC m=+34.402838703" observedRunningTime="2025-09-10 23:44:58.533171387 +0000 UTC m=+35.218354483" watchObservedRunningTime="2025-09-10 23:44:58.535215058 +0000 UTC m=+35.220398194" Sep 10 23:44:59.407098 containerd[1523]: time="2025-09-10T23:44:59.407055368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-vh2hg,Uid:91562592-1b1f-4278-b6a6-71c79990837e,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:44:59.543918 systemd-networkd[1422]: calib1d1aa639aa: Link UP Sep 10 23:44:59.545533 systemd-networkd[1422]: calib1d1aa639aa: Gained carrier Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.460 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.474 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0 calico-apiserver-79fbf74b84- calico-apiserver 91562592-1b1f-4278-b6a6-71c79990837e 779 0 2025-09-10 23:44:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79fbf74b84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-79fbf74b84-vh2hg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib1d1aa639aa [] [] }} ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.474 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.502 [INFO][4229] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" HandleID="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.502 [INFO][4229] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" HandleID="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79fbf74b84-vh2hg", "timestamp":"2025-09-10 23:44:59.502678514 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 
23:44:59.503 [INFO][4229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.503 [INFO][4229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.503 [INFO][4229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.511 [INFO][4229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.516 [INFO][4229] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.521 [INFO][4229] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.524 [INFO][4229] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.527 [INFO][4229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.527 [INFO][4229] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.529 [INFO][4229] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.533 [INFO][4229] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" host="localhost" 
Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.539 [INFO][4229] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.539 [INFO][4229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" host="localhost" Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.539 [INFO][4229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:44:59.557382 containerd[1523]: 2025-09-10 23:44:59.539 [INFO][4229] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" HandleID="k8s-pod-network.77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.541 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0", GenerateName:"calico-apiserver-79fbf74b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"91562592-1b1f-4278-b6a6-71c79990837e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79fbf74b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79fbf74b84-vh2hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1d1aa639aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.542 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.542 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1d1aa639aa ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.543 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.544 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0", GenerateName:"calico-apiserver-79fbf74b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"91562592-1b1f-4278-b6a6-71c79990837e", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79fbf74b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b", Pod:"calico-apiserver-79fbf74b84-vh2hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib1d1aa639aa", MAC:"a6:7e:53:8c:aa:c6", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:44:59.558148 containerd[1523]: 2025-09-10 23:44:59.553 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-vh2hg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--vh2hg-eth0" Sep 10 23:44:59.576959 containerd[1523]: time="2025-09-10T23:44:59.576822969Z" level=info msg="connecting to shim 77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b" address="unix:///run/containerd/s/f3720ffedfb42fa1d307f79ee68e5687a5cffda4821c83ebfd03943877e17d5d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:44:59.603789 systemd[1]: Started cri-containerd-77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b.scope - libcontainer container 77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b. Sep 10 23:44:59.615537 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:44:59.637637 containerd[1523]: time="2025-09-10T23:44:59.637589283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-vh2hg,Uid:91562592-1b1f-4278-b6a6-71c79990837e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b\"" Sep 10 23:44:59.639390 containerd[1523]: time="2025-09-10T23:44:59.639347440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:45:00.406261 containerd[1523]: time="2025-09-10T23:45:00.406209282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fbfhs,Uid:198e9461-1076-4ae3-96d7-cb34713b6cd8,Namespace:kube-system,Attempt:0,}" Sep 10 23:45:00.557035 systemd-networkd[1422]: calif77651d3002: Link UP Sep 10 23:45:00.559836 systemd-networkd[1422]: calif77651d3002: Gained 
carrier Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.449 [INFO][4314] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.465 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0 coredns-668d6bf9bc- kube-system 198e9461-1076-4ae3-96d7-cb34713b6cd8 784 0 2025-09-10 23:44:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fbfhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif77651d3002 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.465 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.494 [INFO][4328] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" HandleID="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Workload="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.494 [INFO][4328] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" 
HandleID="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Workload="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fbfhs", "timestamp":"2025-09-10 23:45:00.494226952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.494 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.494 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.494 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.506 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.516 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.523 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.526 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.529 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.529 
[INFO][4328] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.531 [INFO][4328] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059 Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.538 [INFO][4328] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.549 [INFO][4328] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.549 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" host="localhost" Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.549 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:45:00.583132 containerd[1523]: 2025-09-10 23:45:00.549 [INFO][4328] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" HandleID="k8s-pod-network.bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Workload="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583902 containerd[1523]: 2025-09-10 23:45:00.553 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"198e9461-1076-4ae3-96d7-cb34713b6cd8", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fbfhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif77651d3002", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:00.583902 containerd[1523]: 2025-09-10 23:45:00.553 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583902 containerd[1523]: 2025-09-10 23:45:00.553 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif77651d3002 ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583902 containerd[1523]: 2025-09-10 23:45:00.561 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.583902 containerd[1523]: 2025-09-10 23:45:00.565 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"198e9461-1076-4ae3-96d7-cb34713b6cd8", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059", Pod:"coredns-668d6bf9bc-fbfhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif77651d3002", MAC:"36:b6:a1:11:be:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:00.584111 containerd[1523]: 2025-09-10 23:45:00.577 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" Namespace="kube-system" Pod="coredns-668d6bf9bc-fbfhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fbfhs-eth0" Sep 10 23:45:00.611511 containerd[1523]: time="2025-09-10T23:45:00.611469294Z" level=info msg="connecting to shim bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059" address="unix:///run/containerd/s/ea46b3c19894498c8906f7514b8c116e8e2ef1b51ce955d3d6a8798b814ae4ff" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:45:00.657841 systemd[1]: Started cri-containerd-bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059.scope - libcontainer container bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059. Sep 10 23:45:00.674916 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:45:00.701156 containerd[1523]: time="2025-09-10T23:45:00.701111033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fbfhs,Uid:198e9461-1076-4ae3-96d7-cb34713b6cd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059\"" Sep 10 23:45:00.705314 containerd[1523]: time="2025-09-10T23:45:00.705254967Z" level=info msg="CreateContainer within sandbox \"bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:45:00.720747 containerd[1523]: time="2025-09-10T23:45:00.720687578Z" level=info msg="Container 3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:45:00.721877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917006921.mount: Deactivated successfully. 
Sep 10 23:45:00.741467 containerd[1523]: time="2025-09-10T23:45:00.741386011Z" level=info msg="CreateContainer within sandbox \"bfa35d3783f4104938983a7eeb643063d85debe731d67232f49be9db63955059\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922\"" Sep 10 23:45:00.742238 containerd[1523]: time="2025-09-10T23:45:00.742149723Z" level=info msg="StartContainer for \"3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922\"" Sep 10 23:45:00.743154 containerd[1523]: time="2025-09-10T23:45:00.743032800Z" level=info msg="connecting to shim 3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922" address="unix:///run/containerd/s/ea46b3c19894498c8906f7514b8c116e8e2ef1b51ce955d3d6a8798b814ae4ff" protocol=ttrpc version=3 Sep 10 23:45:00.773821 systemd[1]: Started cri-containerd-3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922.scope - libcontainer container 3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922. 
Sep 10 23:45:00.812377 containerd[1523]: time="2025-09-10T23:45:00.812330361Z" level=info msg="StartContainer for \"3f6216b0ac0a92475ef66de576aa38a371b4fcca4d9569ff6b5a95d1ab11c922\" returns successfully" Sep 10 23:45:00.981924 systemd-networkd[1422]: calib1d1aa639aa: Gained IPv6LL Sep 10 23:45:01.407755 containerd[1523]: time="2025-09-10T23:45:01.407262085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-8vgwp,Uid:17b7ffaf-afae-4058-99ef-1152edfa4efa,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:45:01.407755 containerd[1523]: time="2025-09-10T23:45:01.407261845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnr4r,Uid:b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab,Namespace:calico-system,Attempt:0,}" Sep 10 23:45:01.579581 kubelet[2640]: I0910 23:45:01.579349 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fbfhs" podStartSLOduration=31.579329257 podStartE2EDuration="31.579329257s" podCreationTimestamp="2025-09-10 23:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:45:01.577892159 +0000 UTC m=+38.263075255" watchObservedRunningTime="2025-09-10 23:45:01.579329257 +0000 UTC m=+38.264512353" Sep 10 23:45:01.635394 containerd[1523]: time="2025-09-10T23:45:01.635283711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:01.637807 containerd[1523]: time="2025-09-10T23:45:01.637783013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:45:01.640073 containerd[1523]: time="2025-09-10T23:45:01.640049506Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 10 23:45:01.648778 containerd[1523]: time="2025-09-10T23:45:01.648744103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:01.650651 containerd[1523]: time="2025-09-10T23:45:01.650573218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.011195056s" Sep 10 23:45:01.650786 containerd[1523]: time="2025-09-10T23:45:01.650769746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:45:01.656865 containerd[1523]: time="2025-09-10T23:45:01.656838634Z" level=info msg="CreateContainer within sandbox \"77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:45:01.673799 containerd[1523]: time="2025-09-10T23:45:01.673697285Z" level=info msg="Container 446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:45:01.696431 containerd[1523]: time="2025-09-10T23:45:01.696375375Z" level=info msg="CreateContainer within sandbox \"77be760e9739d550b3fa4b473f4c0f210d9b60adc3984c43def271857eb63d2b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae\"" Sep 10 23:45:01.698067 containerd[1523]: time="2025-09-10T23:45:01.697877836Z" level=info msg="StartContainer for 
\"446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae\"" Sep 10 23:45:01.699897 containerd[1523]: time="2025-09-10T23:45:01.699370178Z" level=info msg="connecting to shim 446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae" address="unix:///run/containerd/s/f3720ffedfb42fa1d307f79ee68e5687a5cffda4821c83ebfd03943877e17d5d" protocol=ttrpc version=3 Sep 10 23:45:01.719790 systemd[1]: Started cri-containerd-446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae.scope - libcontainer container 446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae. Sep 10 23:45:01.741939 systemd-networkd[1422]: cali55e57883b1d: Link UP Sep 10 23:45:01.742334 systemd-networkd[1422]: cali55e57883b1d: Gained carrier Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.627 [INFO][4462] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.644 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0 calico-apiserver-79fbf74b84- calico-apiserver 17b7ffaf-afae-4058-99ef-1152edfa4efa 782 0 2025-09-10 23:44:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79fbf74b84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79fbf74b84-8vgwp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55e57883b1d [] [] }} ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.644 [INFO][4462] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.678 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" HandleID="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.678 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" HandleID="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79fbf74b84-8vgwp", "timestamp":"2025-09-10 23:45:01.678620207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.678 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.678 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.678 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.691 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.699 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.712 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.715 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.718 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.718 [INFO][4499] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.721 [INFO][4499] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.726 [INFO][4499] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4499] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" host="localhost" Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:45:01.761289 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" HandleID="k8s-pod-network.6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Workload="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.737 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0", GenerateName:"calico-apiserver-79fbf74b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b7ffaf-afae-4058-99ef-1152edfa4efa", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79fbf74b84", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79fbf74b84-8vgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55e57883b1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.738 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.738 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55e57883b1d ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.743 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.745 [INFO][4462] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0", GenerateName:"calico-apiserver-79fbf74b84-", Namespace:"calico-apiserver", SelfLink:"", UID:"17b7ffaf-afae-4058-99ef-1152edfa4efa", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79fbf74b84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b", Pod:"calico-apiserver-79fbf74b84-8vgwp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55e57883b1d", MAC:"a2:1b:46:b1:3c:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:01.761828 containerd[1523]: 2025-09-10 23:45:01.758 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" Namespace="calico-apiserver" Pod="calico-apiserver-79fbf74b84-8vgwp" WorkloadEndpoint="localhost-k8s-calico--apiserver--79fbf74b84--8vgwp-eth0" Sep 10 23:45:01.766670 containerd[1523]: time="2025-09-10T23:45:01.766635615Z" level=info msg="StartContainer for \"446a193b48e306f279465362facf1aca58a2f44e014f46cf322f7a9217cf72ae\" returns successfully" Sep 10 23:45:01.784159 containerd[1523]: time="2025-09-10T23:45:01.784109931Z" level=info msg="connecting to shim 6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b" address="unix:///run/containerd/s/31b9da635225e57c8f269b6fcbe886c167fc60797cbce0de89d772135ff14b4d" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:45:01.806900 systemd[1]: Started cri-containerd-6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b.scope - libcontainer container 6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b. Sep 10 23:45:01.823615 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:45:01.849835 systemd-networkd[1422]: calif3a752d310d: Link UP Sep 10 23:45:01.850564 systemd-networkd[1422]: calif3a752d310d: Gained carrier Sep 10 23:45:01.856512 containerd[1523]: time="2025-09-10T23:45:01.856476217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79fbf74b84-8vgwp,Uid:17b7ffaf-afae-4058-99ef-1152edfa4efa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b\"" Sep 10 23:45:01.864673 containerd[1523]: time="2025-09-10T23:45:01.864562788Z" level=info msg="CreateContainer within sandbox \"6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.638 [INFO][4467] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.657 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qnr4r-eth0 csi-node-driver- calico-system b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab 643 0 2025-09-10 23:44:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qnr4r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3a752d310d [] [] }} ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-" Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.657 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0" Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.688 [INFO][4506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" HandleID="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Workload="localhost-k8s-csi--node--driver--qnr4r-eth0" Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.688 [INFO][4506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" HandleID="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" 
Workload="localhost-k8s-csi--node--driver--qnr4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qnr4r", "timestamp":"2025-09-10 23:45:01.688121837 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.688 [INFO][4506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.733 [INFO][4506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.793 [INFO][4506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.802 [INFO][4506] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.818 [INFO][4506] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.822 [INFO][4506] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.826 [INFO][4506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.826 [INFO][4506] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.829 [INFO][4506] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.833 [INFO][4506] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.842 [INFO][4506] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.842 [INFO][4506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" host="localhost"
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.842 [INFO][4506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 23:45:01.865355 containerd[1523]: 2025-09-10 23:45:01.842 [INFO][4506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" HandleID="k8s-pod-network.b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Workload="localhost-k8s-csi--node--driver--qnr4r-eth0"
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.847 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnr4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qnr4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3a752d310d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.848 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0"
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.848 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3a752d310d ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0"
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.850 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0"
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.850 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qnr4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab", ResourceVersion:"643", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4", Pod:"csi-node-driver-qnr4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3a752d310d", MAC:"56:f2:cf:11:03:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:45:01.866045 containerd[1523]: 2025-09-10 23:45:01.861 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" Namespace="calico-system" Pod="csi-node-driver-qnr4r" WorkloadEndpoint="localhost-k8s-csi--node--driver--qnr4r-eth0"
Sep 10 23:45:01.889751 containerd[1523]: time="2025-09-10T23:45:01.889703179Z" level=info msg="connecting to shim b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4" address="unix:///run/containerd/s/97970a5b55ce01d96de23fcfea4003971aed7cdd46ca1e8f11a2f580200d99f1" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:45:01.899385 containerd[1523]: time="2025-09-10T23:45:01.899342694Z" level=info msg="Container 9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:45:01.912805 systemd[1]: Started cri-containerd-b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4.scope - libcontainer container b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4.
Sep 10 23:45:01.925500 containerd[1523]: time="2025-09-10T23:45:01.925280997Z" level=info msg="CreateContainer within sandbox \"6cc1c1374f36bbec617bf2d2bc7809b109d820bbe88a8740128fa91eb2fad42b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71\""
Sep 10 23:45:01.927933 containerd[1523]: time="2025-09-10T23:45:01.927786660Z" level=info msg="StartContainer for \"9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71\""
Sep 10 23:45:01.929054 containerd[1523]: time="2025-09-10T23:45:01.929023510Z" level=info msg="connecting to shim 9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71" address="unix:///run/containerd/s/31b9da635225e57c8f269b6fcbe886c167fc60797cbce0de89d772135ff14b4d" protocol=ttrpc version=3
Sep 10 23:45:01.929432 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 23:45:01.950021 containerd[1523]: time="2025-09-10T23:45:01.949972929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnr4r,Uid:b009b8ad-18e2-4e3b-8b12-e22cc0b0ccab,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4\""
Sep 10 23:45:01.951456 containerd[1523]: time="2025-09-10T23:45:01.951422188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 23:45:01.952819 systemd[1]: Started cri-containerd-9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71.scope - libcontainer container 9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71.
Sep 10 23:45:01.991332 containerd[1523]: time="2025-09-10T23:45:01.991298063Z" level=info msg="StartContainer for \"9f7ca448fcce8c8cd446adfaa24211ba78e6c589a96aa57928b1edf28d44ff71\" returns successfully"
Sep 10 23:45:02.133730 systemd-networkd[1422]: calif77651d3002: Gained IPv6LL
Sep 10 23:45:02.406223 containerd[1523]: time="2025-09-10T23:45:02.406184409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zbv8,Uid:ba86c2d5-1ce7-42cf-883e-a99585fe4180,Namespace:kube-system,Attempt:0,}"
Sep 10 23:45:02.531242 systemd-networkd[1422]: calif1af21df7a6: Link UP
Sep 10 23:45:02.532960 systemd-networkd[1422]: calif1af21df7a6: Gained carrier
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.428 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.443 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0 coredns-668d6bf9bc- kube-system ba86c2d5-1ce7-42cf-883e-a99585fe4180 785 0 2025-09-10 23:44:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4zbv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1af21df7a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.443 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.475 [INFO][4708] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" HandleID="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Workload="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.479 [INFO][4708] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" HandleID="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Workload="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000487ec0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4zbv8", "timestamp":"2025-09-10 23:45:02.47546101 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.479 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.479 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.479 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.499 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.504 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.509 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.511 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.513 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.513 [INFO][4708] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.515 [INFO][4708] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.519 [INFO][4708] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.524 [INFO][4708] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.524 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" host="localhost"
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.524 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 23:45:02.558177 containerd[1523]: 2025-09-10 23:45:02.524 [INFO][4708] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" HandleID="k8s-pod-network.a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Workload="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558983 containerd[1523]: 2025-09-10 23:45:02.526 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba86c2d5-1ce7-42cf-883e-a99585fe4180", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4zbv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1af21df7a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:45:02.558983 containerd[1523]: 2025-09-10 23:45:02.526 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558983 containerd[1523]: 2025-09-10 23:45:02.527 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1af21df7a6 ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558983 containerd[1523]: 2025-09-10 23:45:02.533 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.558983 containerd[1523]: 2025-09-10 23:45:02.534 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba86c2d5-1ce7-42cf-883e-a99585fe4180", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d", Pod:"coredns-668d6bf9bc-4zbv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1af21df7a6", MAC:"ea:30:35:9e:4e:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:45:02.559150 containerd[1523]: 2025-09-10 23:45:02.549 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zbv8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4zbv8-eth0"
Sep 10 23:45:02.593962 containerd[1523]: time="2025-09-10T23:45:02.593903171Z" level=info msg="connecting to shim a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d" address="unix:///run/containerd/s/0a54865d30e9355032a70e3d4d71d8a81a27f525a8a5c5bf0628bc2a6036ada7" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:45:02.595411 kubelet[2640]: I0910 23:45:02.595354 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79fbf74b84-8vgwp" podStartSLOduration=23.59513706 podStartE2EDuration="23.59513706s" podCreationTimestamp="2025-09-10 23:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:45:02.592149021 +0000 UTC m=+39.277332117" watchObservedRunningTime="2025-09-10 23:45:02.59513706 +0000 UTC m=+39.280320116"
Sep 10 23:45:02.596624 kubelet[2640]: I0910 23:45:02.596547 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79fbf74b84-vh2hg" podStartSLOduration=21.583373578 podStartE2EDuration="23.596534556s" podCreationTimestamp="2025-09-10 23:44:39 +0000 UTC" firstStartedPulling="2025-09-10 23:44:59.639003545 +0000 UTC m=+36.324186641" lastFinishedPulling="2025-09-10 23:45:01.652164523 +0000 UTC m=+38.337347619" observedRunningTime="2025-09-10 23:45:02.563175386 +0000 UTC m=+39.248358482" watchObservedRunningTime="2025-09-10 23:45:02.596534556 +0000 UTC m=+39.281717652"
Sep 10 23:45:02.631800 systemd[1]: Started cri-containerd-a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d.scope - libcontainer container a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d.
Sep 10 23:45:02.650057 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 23:45:02.687184 containerd[1523]: time="2025-09-10T23:45:02.687053124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zbv8,Uid:ba86c2d5-1ce7-42cf-883e-a99585fe4180,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d\""
Sep 10 23:45:02.691052 containerd[1523]: time="2025-09-10T23:45:02.691012801Z" level=info msg="CreateContainer within sandbox \"a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 23:45:02.760081 containerd[1523]: time="2025-09-10T23:45:02.760036232Z" level=info msg="Container a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:45:02.774708 containerd[1523]: time="2025-09-10T23:45:02.774662375Z" level=info msg="CreateContainer within sandbox \"a6d51b3ae2216cda3873c710ffb5efa16651372c7847fa7ad11a981b35ef551d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42\""
Sep 10 23:45:02.776817 containerd[1523]: time="2025-09-10T23:45:02.776751979Z" level=info msg="StartContainer for \"a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42\""
Sep 10 23:45:02.778369 containerd[1523]: time="2025-09-10T23:45:02.778313201Z" level=info msg="connecting to shim a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42" address="unix:///run/containerd/s/0a54865d30e9355032a70e3d4d71d8a81a27f525a8a5c5bf0628bc2a6036ada7" protocol=ttrpc version=3
Sep 10 23:45:02.802797 systemd[1]: Started cri-containerd-a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42.scope - libcontainer container a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42.
Sep 10 23:45:02.839726 containerd[1523]: time="2025-09-10T23:45:02.839680367Z" level=info msg="StartContainer for \"a797095f467f5679d7318187177670e5e100edeb6862e2c408dd0445a8acbe42\" returns successfully"
Sep 10 23:45:03.159673 systemd-networkd[1422]: cali55e57883b1d: Gained IPv6LL
Sep 10 23:45:03.323494 containerd[1523]: time="2025-09-10T23:45:03.322865032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:45:03.327643 containerd[1523]: time="2025-09-10T23:45:03.327611056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 23:45:03.328417 containerd[1523]: time="2025-09-10T23:45:03.328393446Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:45:03.330792 containerd[1523]: time="2025-09-10T23:45:03.330760298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:45:03.331984 containerd[1523]: time="2025-09-10T23:45:03.331959944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.380505275s"
Sep 10 23:45:03.332089 containerd[1523]: time="2025-09-10T23:45:03.332075109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 23:45:03.334176 containerd[1523]: time="2025-09-10T23:45:03.334147949Z" level=info msg="CreateContainer within sandbox \"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 23:45:03.343744 containerd[1523]: time="2025-09-10T23:45:03.343712240Z" level=info msg="Container 462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:45:03.350875 containerd[1523]: time="2025-09-10T23:45:03.350830236Z" level=info msg="CreateContainer within sandbox \"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f\""
Sep 10 23:45:03.353031 containerd[1523]: time="2025-09-10T23:45:03.351691029Z" level=info msg="StartContainer for \"462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f\""
Sep 10 23:45:03.353178 containerd[1523]: time="2025-09-10T23:45:03.353154886Z" level=info msg="connecting to shim 462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f" address="unix:///run/containerd/s/97970a5b55ce01d96de23fcfea4003971aed7cdd46ca1e8f11a2f580200d99f1" protocol=ttrpc version=3
Sep 10 23:45:03.369769 systemd[1]: Started cri-containerd-462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f.scope - libcontainer container 462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f.
Sep 10 23:45:03.413740 systemd-networkd[1422]: calif3a752d310d: Gained IPv6LL
Sep 10 23:45:03.415416 containerd[1523]: time="2025-09-10T23:45:03.415366697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c76848b-jnpzb,Uid:151d71d3-c5e5-45ea-8230-624b6527f1e8,Namespace:calico-system,Attempt:0,}"
Sep 10 23:45:03.492246 containerd[1523]: time="2025-09-10T23:45:03.492206116Z" level=info msg="StartContainer for \"462635f27c5e4d11e8c96f39a2eb6c26c6ca5c7cdff6bcb171b37797b7ec236f\" returns successfully"
Sep 10 23:45:03.493782 containerd[1523]: time="2025-09-10T23:45:03.493728135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 23:45:03.571657 systemd-networkd[1422]: cali4d7bb2bbf41: Link UP
Sep 10 23:45:03.573462 systemd-networkd[1422]: cali4d7bb2bbf41: Gained carrier
Sep 10 23:45:03.589416 kubelet[2640]: I0910 23:45:03.588873 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.475 [INFO][4857] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.494 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0 calico-kube-controllers-766c76848b- calico-system 151d71d3-c5e5-45ea-8230-624b6527f1e8 781 0 2025-09-10 23:44:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:766c76848b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-766c76848b-jnpzb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4d7bb2bbf41 [] [] }} ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-"
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.494 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0"
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.520 [INFO][4883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" HandleID="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Workload="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0"
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.520 [INFO][4883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" HandleID="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Workload="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-766c76848b-jnpzb", "timestamp":"2025-09-10 23:45:03.520453251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.520 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.520 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.520 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.531 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.538 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.542 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.545 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.548 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.548 [INFO][4883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.550 [INFO][4883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.556 [INFO][4883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.565 [INFO][4883] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.565 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" host="localhost" Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.565 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:45:03.606189 containerd[1523]: 2025-09-10 23:45:03.565 [INFO][4883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" HandleID="k8s-pod-network.564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Workload="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.567 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0", GenerateName:"calico-kube-controllers-766c76848b-", Namespace:"calico-system", SelfLink:"", UID:"151d71d3-c5e5-45ea-8230-624b6527f1e8", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"766c76848b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-766c76848b-jnpzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d7bb2bbf41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.568 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.568 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d7bb2bbf41 ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.574 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.575 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0", GenerateName:"calico-kube-controllers-766c76848b-", Namespace:"calico-system", SelfLink:"", UID:"151d71d3-c5e5-45ea-8230-624b6527f1e8", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766c76848b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e", Pod:"calico-kube-controllers-766c76848b-jnpzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d7bb2bbf41", MAC:"2e:f1:c7:39:0f:5d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:03.606733 containerd[1523]: 2025-09-10 23:45:03.595 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" Namespace="calico-system" Pod="calico-kube-controllers-766c76848b-jnpzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766c76848b--jnpzb-eth0" Sep 10 23:45:03.624913 kubelet[2640]: I0910 23:45:03.624825 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4zbv8" podStartSLOduration=33.624805215 podStartE2EDuration="33.624805215s" podCreationTimestamp="2025-09-10 23:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:45:03.623968063 +0000 UTC m=+40.309151159" watchObservedRunningTime="2025-09-10 23:45:03.624805215 +0000 UTC m=+40.309988311" Sep 10 23:45:03.656882 containerd[1523]: time="2025-09-10T23:45:03.656791455Z" level=info msg="connecting to shim 564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e" address="unix:///run/containerd/s/090abfdb33f5f181738377171c5aa2ce143d1fcc14ef4fe0ad21b398c59f28a7" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:45:03.689846 systemd[1]: Started cri-containerd-564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e.scope - libcontainer container 564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e. 
Sep 10 23:45:03.703691 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:45:03.731889 containerd[1523]: time="2025-09-10T23:45:03.731654397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766c76848b-jnpzb,Uid:151d71d3-c5e5-45ea-8230-624b6527f1e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e\"" Sep 10 23:45:04.181764 systemd-networkd[1422]: calif1af21df7a6: Gained IPv6LL Sep 10 23:45:04.406972 containerd[1523]: time="2025-09-10T23:45:04.406934301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g4xj7,Uid:6343e5ee-80d9-43c3-abc2-8e7a580d6e68,Namespace:calico-system,Attempt:0,}" Sep 10 23:45:04.509631 systemd-networkd[1422]: cali65a92ab07a3: Link UP Sep 10 23:45:04.510078 systemd-networkd[1422]: cali65a92ab07a3: Gained carrier Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.427 [INFO][4972] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.439 [INFO][4972] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--g4xj7-eth0 goldmane-54d579b49d- calico-system 6343e5ee-80d9-43c3-abc2-8e7a580d6e68 783 0 2025-09-10 23:44:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-g4xj7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali65a92ab07a3 [] [] }} ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-" Sep 10 
23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.439 [INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.463 [INFO][4987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" HandleID="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Workload="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.463 [INFO][4987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" HandleID="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Workload="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-g4xj7", "timestamp":"2025-09-10 23:45:04.463119579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.463 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.463 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.463 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.472 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.476 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.480 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.482 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.484 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.484 [INFO][4987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.487 [INFO][4987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4 Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.494 [INFO][4987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.504 [INFO][4987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.504 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" host="localhost" Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.504 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:45:04.525228 containerd[1523]: 2025-09-10 23:45:04.504 [INFO][4987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" HandleID="k8s-pod-network.b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Workload="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.507 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--g4xj7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6343e5ee-80d9-43c3-abc2-8e7a580d6e68", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-g4xj7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65a92ab07a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.507 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.507 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65a92ab07a3 ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.509 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.510 [INFO][4972] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" 
Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--g4xj7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6343e5ee-80d9-43c3-abc2-8e7a580d6e68", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4", Pod:"goldmane-54d579b49d-g4xj7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali65a92ab07a3", MAC:"f6:7b:c3:cd:04:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:45:04.525856 containerd[1523]: 2025-09-10 23:45:04.521 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" Namespace="calico-system" Pod="goldmane-54d579b49d-g4xj7" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--g4xj7-eth0" Sep 10 23:45:04.553862 containerd[1523]: time="2025-09-10T23:45:04.553819399Z" 
level=info msg="connecting to shim b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4" address="unix:///run/containerd/s/2e69fb1af95916901cfffdedc289df1d9793dabbf0c6b2c3e3bae75506a73d95" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:45:04.580810 systemd[1]: Started cri-containerd-b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4.scope - libcontainer container b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4. Sep 10 23:45:04.594535 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:45:04.628339 containerd[1523]: time="2025-09-10T23:45:04.628264685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-g4xj7,Uid:6343e5ee-80d9-43c3-abc2-8e7a580d6e68,Namespace:calico-system,Attempt:0,} returns sandbox id \"b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4\"" Sep 10 23:45:04.751960 containerd[1523]: time="2025-09-10T23:45:04.751705659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:04.753335 containerd[1523]: time="2025-09-10T23:45:04.753292599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 23:45:04.754248 containerd[1523]: time="2025-09-10T23:45:04.754213794Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:04.758696 containerd[1523]: time="2025-09-10T23:45:04.758647361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:04.759922 containerd[1523]: 
time="2025-09-10T23:45:04.759893208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.266132671s" Sep 10 23:45:04.759968 containerd[1523]: time="2025-09-10T23:45:04.759927289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 23:45:04.761482 containerd[1523]: time="2025-09-10T23:45:04.760879365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:45:04.762718 containerd[1523]: time="2025-09-10T23:45:04.762685873Z" level=info msg="CreateContainer within sandbox \"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 23:45:04.829818 containerd[1523]: time="2025-09-10T23:45:04.829772242Z" level=info msg="Container 4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:45:04.840103 containerd[1523]: time="2025-09-10T23:45:04.840058190Z" level=info msg="CreateContainer within sandbox \"b0f320db10f441b6a585f8e000118ff76d9211d9f5c09af9853cc9f0464319c4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8\"" Sep 10 23:45:04.840530 containerd[1523]: time="2025-09-10T23:45:04.840510247Z" level=info msg="StartContainer for \"4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8\"" Sep 10 23:45:04.844104 containerd[1523]: time="2025-09-10T23:45:04.844066941Z" level=info 
msg="connecting to shim 4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8" address="unix:///run/containerd/s/97970a5b55ce01d96de23fcfea4003971aed7cdd46ca1e8f11a2f580200d99f1" protocol=ttrpc version=3 Sep 10 23:45:04.869862 systemd[1]: Started cri-containerd-4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8.scope - libcontainer container 4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8. Sep 10 23:45:04.927552 containerd[1523]: time="2025-09-10T23:45:04.927486446Z" level=info msg="StartContainer for \"4f60d08746d373f9b8bdaf12f135c970e7d7123630bf051237038e1bcd6db4d8\" returns successfully" Sep 10 23:45:05.060841 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:50410.service - OpenSSH per-connection server daemon (10.0.0.1:50410). Sep 10 23:45:05.074039 kubelet[2640]: I0910 23:45:05.073990 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:45:05.138965 sshd[5113]: Accepted publickey for core from 10.0.0.1 port 50410 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:05.140777 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:05.144932 systemd-logind[1495]: New session 8 of user core. Sep 10 23:45:05.155818 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:45:05.269768 systemd-networkd[1422]: cali4d7bb2bbf41: Gained IPv6LL Sep 10 23:45:05.415220 sshd[5115]: Connection closed by 10.0.0.1 port 50410 Sep 10 23:45:05.415528 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:05.419082 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:50410.service: Deactivated successfully. Sep 10 23:45:05.420985 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:45:05.423825 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:45:05.424924 systemd-logind[1495]: Removed session 8. 
Sep 10 23:45:05.481634 kubelet[2640]: I0910 23:45:05.481570 2640 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 23:45:05.481771 kubelet[2640]: I0910 23:45:05.481651 2640 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 23:45:05.562676 systemd-networkd[1422]: vxlan.calico: Link UP Sep 10 23:45:05.562684 systemd-networkd[1422]: vxlan.calico: Gained carrier Sep 10 23:45:05.691974 kubelet[2640]: I0910 23:45:05.691654 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qnr4r" podStartSLOduration=21.882033282 podStartE2EDuration="24.691634065s" podCreationTimestamp="2025-09-10 23:44:41 +0000 UTC" firstStartedPulling="2025-09-10 23:45:01.951091415 +0000 UTC m=+38.636274511" lastFinishedPulling="2025-09-10 23:45:04.760692198 +0000 UTC m=+41.445875294" observedRunningTime="2025-09-10 23:45:05.689502147 +0000 UTC m=+42.374685243" watchObservedRunningTime="2025-09-10 23:45:05.691634065 +0000 UTC m=+42.376817161" Sep 10 23:45:06.358824 systemd-networkd[1422]: cali65a92ab07a3: Gained IPv6LL Sep 10 23:45:06.869731 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Sep 10 23:45:07.082930 containerd[1523]: time="2025-09-10T23:45:07.082867369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:07.084654 containerd[1523]: time="2025-09-10T23:45:07.084612430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:45:07.085424 containerd[1523]: time="2025-09-10T23:45:07.085378856Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:07.087847 containerd[1523]: time="2025-09-10T23:45:07.087811061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:07.088419 containerd[1523]: time="2025-09-10T23:45:07.088353200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.327439633s" Sep 10 23:45:07.088419 containerd[1523]: time="2025-09-10T23:45:07.088414242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:45:07.089368 containerd[1523]: time="2025-09-10T23:45:07.089337874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:45:07.100741 containerd[1523]: time="2025-09-10T23:45:07.100694068Z" level=info msg="CreateContainer within sandbox \"564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:45:07.112894 containerd[1523]: time="2025-09-10T23:45:07.112846250Z" level=info msg="Container 6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:45:07.125860 containerd[1523]: time="2025-09-10T23:45:07.125741338Z" level=info msg="CreateContainer within sandbox \"564708736bdfcd672be524047d434def51493a1085ce7d10a6194f3c7f93073e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\"" Sep 10 23:45:07.127024 containerd[1523]: time="2025-09-10T23:45:07.126945819Z" level=info msg="StartContainer for \"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\"" Sep 10 23:45:07.128537 containerd[1523]: time="2025-09-10T23:45:07.128500753Z" level=info msg="connecting to shim 6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d" address="unix:///run/containerd/s/090abfdb33f5f181738377171c5aa2ce143d1fcc14ef4fe0ad21b398c59f28a7" protocol=ttrpc version=3 Sep 10 23:45:07.148821 systemd[1]: Started cri-containerd-6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d.scope - libcontainer container 6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d. Sep 10 23:45:07.186305 containerd[1523]: time="2025-09-10T23:45:07.186262838Z" level=info msg="StartContainer for \"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\" returns successfully" Sep 10 23:45:07.630711 kubelet[2640]: I0910 23:45:07.630650 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-766c76848b-jnpzb" podStartSLOduration=23.274158434 podStartE2EDuration="26.630631824s" podCreationTimestamp="2025-09-10 23:44:41 +0000 UTC" firstStartedPulling="2025-09-10 23:45:03.7327482 +0000 UTC m=+40.417931296" lastFinishedPulling="2025-09-10 23:45:07.08922159 +0000 UTC m=+43.774404686" observedRunningTime="2025-09-10 23:45:07.629426382 +0000 UTC m=+44.314609478" watchObservedRunningTime="2025-09-10 23:45:07.630631824 +0000 UTC m=+44.315814920" Sep 10 23:45:07.675581 containerd[1523]: time="2025-09-10T23:45:07.675537703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\" id:\"7ded70c77de30f91ddfc445cd20891077c94ccf851f1b830cd2d1aa5bbc2432a\" pid:5330 exited_at:{seconds:1757547907 nanos:675054206}" Sep 10 23:45:08.706619 
kubelet[2640]: I0910 23:45:08.706522 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:45:09.152039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411875240.mount: Deactivated successfully. Sep 10 23:45:09.492325 containerd[1523]: time="2025-09-10T23:45:09.491762833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:09.492691 containerd[1523]: time="2025-09-10T23:45:09.492555899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 23:45:09.494948 containerd[1523]: time="2025-09-10T23:45:09.494920257Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:09.497332 containerd[1523]: time="2025-09-10T23:45:09.497208732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:45:09.499088 containerd[1523]: time="2025-09-10T23:45:09.498506375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.40913498s" Sep 10 23:45:09.499088 containerd[1523]: time="2025-09-10T23:45:09.498542176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 23:45:09.502329 containerd[1523]: time="2025-09-10T23:45:09.501927527Z" level=info 
msg="CreateContainer within sandbox \"b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 23:45:09.512639 containerd[1523]: time="2025-09-10T23:45:09.510716176Z" level=info msg="Container 2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:45:09.519870 containerd[1523]: time="2025-09-10T23:45:09.519673671Z" level=info msg="CreateContainer within sandbox \"b420b4ab88e08fe3fb8d46301876b7ac301e29eeec4013e3deda5ea7e90ccfd4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\"" Sep 10 23:45:09.520961 containerd[1523]: time="2025-09-10T23:45:09.520721985Z" level=info msg="StartContainer for \"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\"" Sep 10 23:45:09.521997 containerd[1523]: time="2025-09-10T23:45:09.521966026Z" level=info msg="connecting to shim 2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9" address="unix:///run/containerd/s/2e69fb1af95916901cfffdedc289df1d9793dabbf0c6b2c3e3bae75506a73d95" protocol=ttrpc version=3 Sep 10 23:45:09.570677 systemd[1]: Started cri-containerd-2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9.scope - libcontainer container 2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9. Sep 10 23:45:09.697110 containerd[1523]: time="2025-09-10T23:45:09.696982620Z" level=info msg="StartContainer for \"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\" returns successfully" Sep 10 23:45:10.433814 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:43648.service - OpenSSH per-connection server daemon (10.0.0.1:43648). 
Sep 10 23:45:10.512571 sshd[5390]: Accepted publickey for core from 10.0.0.1 port 43648 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:10.514312 sshd-session[5390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:10.519160 systemd-logind[1495]: New session 9 of user core. Sep 10 23:45:10.524770 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:45:10.649231 kubelet[2640]: I0910 23:45:10.649117 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-g4xj7" podStartSLOduration=24.780580694 podStartE2EDuration="29.649098672s" podCreationTimestamp="2025-09-10 23:44:41 +0000 UTC" firstStartedPulling="2025-09-10 23:45:04.631390203 +0000 UTC m=+41.316573299" lastFinishedPulling="2025-09-10 23:45:09.499908221 +0000 UTC m=+46.185091277" observedRunningTime="2025-09-10 23:45:10.64843373 +0000 UTC m=+47.333616866" watchObservedRunningTime="2025-09-10 23:45:10.649098672 +0000 UTC m=+47.334281768" Sep 10 23:45:10.706481 sshd[5392]: Connection closed by 10.0.0.1 port 43648 Sep 10 23:45:10.708012 sshd-session[5390]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:10.712350 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:43648.service: Deactivated successfully. Sep 10 23:45:10.714550 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:45:10.715410 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:45:10.717363 systemd-logind[1495]: Removed session 9. 
Sep 10 23:45:10.739897 containerd[1523]: time="2025-09-10T23:45:10.739859496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\" id:\"9a273a56767b4fe3c56108a08c76a8d49eae650e62041ddffda09531149c1054\" pid:5417 exit_status:1 exited_at:{seconds:1757547910 nanos:739425722}" Sep 10 23:45:11.727133 containerd[1523]: time="2025-09-10T23:45:11.727090348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\" id:\"45bfc53d3a03cc22a60b5c1d8575f6803301ae54222d1978f94a0f0554fea5ba\" pid:5453 exit_status:1 exited_at:{seconds:1757547911 nanos:726740177}" Sep 10 23:45:15.724674 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:43802.service - OpenSSH per-connection server daemon (10.0.0.1:43802). Sep 10 23:45:15.800237 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 43802 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:15.802472 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:15.809034 systemd-logind[1495]: New session 10 of user core. Sep 10 23:45:15.819866 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 23:45:15.980716 sshd[5475]: Connection closed by 10.0.0.1 port 43802 Sep 10 23:45:15.981102 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:15.992873 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:43802.service: Deactivated successfully. Sep 10 23:45:15.995690 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:45:15.997160 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:45:16.004533 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:43810.service - OpenSSH per-connection server daemon (10.0.0.1:43810). Sep 10 23:45:16.005586 systemd-logind[1495]: Removed session 10. 
Sep 10 23:45:16.060826 sshd[5491]: Accepted publickey for core from 10.0.0.1 port 43810 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:16.063852 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:16.068687 systemd-logind[1495]: New session 11 of user core. Sep 10 23:45:16.078812 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 23:45:16.275055 sshd[5493]: Connection closed by 10.0.0.1 port 43810 Sep 10 23:45:16.275508 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:16.289102 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:43810.service: Deactivated successfully. Sep 10 23:45:16.292129 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:45:16.294774 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:45:16.297930 systemd-logind[1495]: Removed session 11. Sep 10 23:45:16.301120 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:43820.service - OpenSSH per-connection server daemon (10.0.0.1:43820). Sep 10 23:45:16.365664 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 43820 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:16.366989 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:16.371043 systemd-logind[1495]: New session 12 of user core. Sep 10 23:45:16.382809 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 23:45:16.556068 sshd[5506]: Connection closed by 10.0.0.1 port 43820 Sep 10 23:45:16.556357 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:16.560034 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. Sep 10 23:45:16.560187 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:43820.service: Deactivated successfully. Sep 10 23:45:16.562236 systemd[1]: session-12.scope: Deactivated successfully. 
Sep 10 23:45:16.565761 systemd-logind[1495]: Removed session 12. Sep 10 23:45:21.572056 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:53950.service - OpenSSH per-connection server daemon (10.0.0.1:53950). Sep 10 23:45:21.644751 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 53950 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:21.646041 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:21.650339 systemd-logind[1495]: New session 13 of user core. Sep 10 23:45:21.661784 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 23:45:21.889663 sshd[5526]: Connection closed by 10.0.0.1 port 53950 Sep 10 23:45:21.889888 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:21.893780 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:53950.service: Deactivated successfully. Sep 10 23:45:21.896861 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 23:45:21.898163 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. Sep 10 23:45:21.900065 systemd-logind[1495]: Removed session 13. Sep 10 23:45:25.591510 containerd[1523]: time="2025-09-10T23:45:25.591223859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" id:\"5a4d0f18daaf119643fcfb718577ff85360314343674bac6f5dfb4b6118aafd4\" pid:5552 exited_at:{seconds:1757547925 nanos:590921173}" Sep 10 23:45:26.905388 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:53970.service - OpenSSH per-connection server daemon (10.0.0.1:53970). Sep 10 23:45:26.974031 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 53970 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:26.975460 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:26.982667 systemd-logind[1495]: New session 14 of user core. 
Sep 10 23:45:26.997854 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 23:45:27.187203 sshd[5575]: Connection closed by 10.0.0.1 port 53970 Sep 10 23:45:27.187786 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:27.192066 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:53970.service: Deactivated successfully. Sep 10 23:45:27.194088 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 23:45:27.194910 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. Sep 10 23:45:27.196651 systemd-logind[1495]: Removed session 14. Sep 10 23:45:32.202868 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:43966.service - OpenSSH per-connection server daemon (10.0.0.1:43966). Sep 10 23:45:32.299107 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 43966 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:32.301614 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:32.307706 systemd-logind[1495]: New session 15 of user core. Sep 10 23:45:32.313146 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 23:45:32.518624 sshd[5596]: Connection closed by 10.0.0.1 port 43966 Sep 10 23:45:32.518932 sshd-session[5594]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:32.522787 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. Sep 10 23:45:32.523036 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:43966.service: Deactivated successfully. Sep 10 23:45:32.526215 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 23:45:32.527661 systemd-logind[1495]: Removed session 15. 
Sep 10 23:45:34.955146 containerd[1523]: time="2025-09-10T23:45:34.955094142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\" id:\"5e905228b4f614bdd002fcff7bdf9234f57f86fb4b5fbdf5364e1a23d97b8b8c\" pid:5620 exited_at:{seconds:1757547934 nanos:954797977}" Sep 10 23:45:37.530857 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972). Sep 10 23:45:37.586283 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:37.587834 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:37.591545 systemd-logind[1495]: New session 16 of user core. Sep 10 23:45:37.602784 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 23:45:37.661062 containerd[1523]: time="2025-09-10T23:45:37.661009570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6845bec66f1ebbdcd07936ffdb427949cedc09407eb6745c64201bc7356d778d\" id:\"91fd876fc04fc0e66b6bf48493314dc8f7eb476ee1bbf1542d6e81f29562025a\" pid:5645 exited_at:{seconds:1757547937 nanos:660767086}" Sep 10 23:45:37.800532 sshd[5633]: Connection closed by 10.0.0.1 port 43972 Sep 10 23:45:37.801243 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:37.810498 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:43972.service: Deactivated successfully. Sep 10 23:45:37.812189 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 23:45:37.812937 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. Sep 10 23:45:37.816109 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:43986.service - OpenSSH per-connection server daemon (10.0.0.1:43986). Sep 10 23:45:37.817199 systemd-logind[1495]: Removed session 16. 
Sep 10 23:45:37.873233 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 43986 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:37.875847 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:37.881795 systemd-logind[1495]: New session 17 of user core. Sep 10 23:45:37.888820 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 23:45:38.158676 sshd[5670]: Connection closed by 10.0.0.1 port 43986 Sep 10 23:45:38.160248 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:38.173087 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:43986.service: Deactivated successfully. Sep 10 23:45:38.176520 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 23:45:38.177579 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit. Sep 10 23:45:38.181033 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). Sep 10 23:45:38.182177 systemd-logind[1495]: Removed session 17. Sep 10 23:45:38.242052 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:38.243534 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:38.250066 systemd-logind[1495]: New session 18 of user core. Sep 10 23:45:38.256789 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 23:45:38.950062 sshd[5684]: Connection closed by 10.0.0.1 port 44000 Sep 10 23:45:38.950303 sshd-session[5682]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:38.959514 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:44000.service: Deactivated successfully. Sep 10 23:45:38.964376 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 23:45:38.966640 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit. 
Sep 10 23:45:38.972133 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:44050.service - OpenSSH per-connection server daemon (10.0.0.1:44050). Sep 10 23:45:38.974779 systemd-logind[1495]: Removed session 18. Sep 10 23:45:39.030627 sshd[5704]: Accepted publickey for core from 10.0.0.1 port 44050 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:39.032912 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:39.037305 systemd-logind[1495]: New session 19 of user core. Sep 10 23:45:39.045784 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 23:45:39.350078 sshd[5707]: Connection closed by 10.0.0.1 port 44050 Sep 10 23:45:39.350467 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:39.359421 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:44050.service: Deactivated successfully. Sep 10 23:45:39.361409 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 23:45:39.362349 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit. Sep 10 23:45:39.364963 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:44084.service - OpenSSH per-connection server daemon (10.0.0.1:44084). Sep 10 23:45:39.370158 systemd-logind[1495]: Removed session 19. Sep 10 23:45:39.427220 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 44084 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:39.429182 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:39.437011 systemd-logind[1495]: New session 20 of user core. Sep 10 23:45:39.441801 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 10 23:45:39.608811 sshd[5720]: Connection closed by 10.0.0.1 port 44084 Sep 10 23:45:39.609092 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:39.613283 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:44084.service: Deactivated successfully. Sep 10 23:45:39.615081 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 23:45:39.616459 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit. Sep 10 23:45:39.618266 systemd-logind[1495]: Removed session 20. Sep 10 23:45:41.742069 containerd[1523]: time="2025-09-10T23:45:41.741915361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2feb7985405149e02aa8b88bed2d6abff39d2c0b630547e775b8c0a6da1338a9\" id:\"5d456d84b7f4c5c9eefe3833e4425356fb75db713bf0b286561bd494db676862\" pid:5745 exited_at:{seconds:1757547941 nanos:741563396}" Sep 10 23:45:44.621226 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:57000.service - OpenSSH per-connection server daemon (10.0.0.1:57000). Sep 10 23:45:44.678088 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 57000 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:44.679468 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:44.689870 systemd-logind[1495]: New session 21 of user core. Sep 10 23:45:44.695854 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 23:45:44.846413 sshd[5763]: Connection closed by 10.0.0.1 port 57000 Sep 10 23:45:44.847407 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:44.852481 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:57000.service: Deactivated successfully. Sep 10 23:45:44.855176 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 23:45:44.856698 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit. Sep 10 23:45:44.858916 systemd-logind[1495]: Removed session 21. 
Sep 10 23:45:49.873765 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008). Sep 10 23:45:49.937933 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:49.939428 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:49.943667 systemd-logind[1495]: New session 22 of user core. Sep 10 23:45:49.950757 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 23:45:50.098918 sshd[5787]: Connection closed by 10.0.0.1 port 57008 Sep 10 23:45:50.099410 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:50.103332 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:57008.service: Deactivated successfully. Sep 10 23:45:50.105415 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 23:45:50.108645 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit. Sep 10 23:45:50.110222 systemd-logind[1495]: Removed session 22. Sep 10 23:45:55.112281 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:54108.service - OpenSSH per-connection server daemon (10.0.0.1:54108). Sep 10 23:45:55.177061 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 54108 ssh2: RSA SHA256:BCMCrC4Hd5TTK0KWc8c5xeS0p+QB+qGbTnvzVqNWMjs Sep 10 23:45:55.178860 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:45:55.183171 systemd-logind[1495]: New session 23 of user core. Sep 10 23:45:55.194801 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 23:45:55.398846 sshd[5802]: Connection closed by 10.0.0.1 port 54108 Sep 10 23:45:55.399218 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Sep 10 23:45:55.404227 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:54108.service: Deactivated successfully. 
Sep 10 23:45:55.408168 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 23:45:55.410058 systemd-logind[1495]: Session 23 logged out. Waiting for processes to exit. Sep 10 23:45:55.413806 systemd-logind[1495]: Removed session 23. Sep 10 23:45:55.617402 containerd[1523]: time="2025-09-10T23:45:55.617305421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f327b807f111ba0a8e9099bc5956d635873aef145f9ea532862071eb36dd1c41\" id:\"36c793f310d44b6af1bf3aa2c75ca85003a2eec2e80bc41a0f9a716898986649\" pid:5826 exited_at:{seconds:1757547955 nanos:616929817}"