Sep 3 23:21:59.766923 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 3 23:21:59.766944 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:21:59.766954 kernel: KASLR enabled
Sep 3 23:21:59.766960 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:21:59.766965 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 3 23:21:59.766970 kernel: random: crng init done
Sep 3 23:21:59.766977 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 3 23:21:59.766983 kernel: secureboot: Secure boot enabled
Sep 3 23:21:59.766988 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:21:59.766995 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 3 23:21:59.767001 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 3 23:21:59.767007 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767012 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767018 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767025 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767033 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767038 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767045 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767051 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767057 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:21:59.767062 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 3 23:21:59.767068 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:21:59.767074 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:21:59.767080 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 3 23:21:59.767095 kernel: Zone ranges:
Sep 3 23:21:59.767103 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:21:59.767119 kernel: DMA32 empty
Sep 3 23:21:59.767126 kernel: Normal empty
Sep 3 23:21:59.767132 kernel: Device empty
Sep 3 23:21:59.767137 kernel: Movable zone start for each node
Sep 3 23:21:59.767143 kernel: Early memory node ranges
Sep 3 23:21:59.767149 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 3 23:21:59.767156 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 3 23:21:59.767162 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 3 23:21:59.767168 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 3 23:21:59.767174 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 3 23:21:59.767179 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 3 23:21:59.767187 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 3 23:21:59.767193 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 3 23:21:59.767199 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 3 23:21:59.767208 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:21:59.767214 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 3 23:21:59.767220 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 3 23:21:59.767227 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:21:59.767234 kernel: psci: PSCIv1.1 detected in firmware.
Sep 3 23:21:59.767241 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:21:59.767247 kernel: psci: Trusted OS migration not required
Sep 3 23:21:59.767253 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:21:59.767259 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 3 23:21:59.767266 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:21:59.767272 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:21:59.767279 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 3 23:21:59.767285 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:21:59.767293 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:21:59.767299 kernel: CPU features: detected: Spectre-v4
Sep 3 23:21:59.767305 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:21:59.767311 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 3 23:21:59.767318 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 3 23:21:59.767324 kernel: CPU features: detected: ARM erratum 1418040
Sep 3 23:21:59.767330 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 3 23:21:59.767336 kernel: alternatives: applying boot alternatives
Sep 3 23:21:59.767344 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:21:59.767350 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:21:59.767357 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:21:59.767364 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:21:59.767370 kernel: Fallback order for Node 0: 0
Sep 3 23:21:59.767377 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 3 23:21:59.767383 kernel: Policy zone: DMA
Sep 3 23:21:59.767389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:21:59.767395 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 3 23:21:59.767402 kernel: software IO TLB: area num 4.
Sep 3 23:21:59.767408 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 3 23:21:59.767414 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 3 23:21:59.767421 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 3 23:21:59.767427 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:21:59.767434 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:21:59.767442 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 3 23:21:59.767449 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:21:59.767455 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:21:59.767461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:21:59.767468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 3 23:21:59.767474 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:21:59.767481 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:21:59.767487 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:21:59.767493 kernel: GICv3: 256 SPIs implemented
Sep 3 23:21:59.767500 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:21:59.767506 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:21:59.767514 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 3 23:21:59.767520 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:21:59.767526 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 3 23:21:59.767532 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 3 23:21:59.767539 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:21:59.767545 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:21:59.767552 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 3 23:21:59.767558 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 3 23:21:59.767564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:21:59.767571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:21:59.767577 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 3 23:21:59.767584 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 3 23:21:59.767592 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 3 23:21:59.767598 kernel: arm-pv: using stolen time PV
Sep 3 23:21:59.767605 kernel: Console: colour dummy device 80x25
Sep 3 23:21:59.767611 kernel: ACPI: Core revision 20240827
Sep 3 23:21:59.767618 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 3 23:21:59.767624 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:21:59.767631 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:21:59.767637 kernel: landlock: Up and running.
Sep 3 23:21:59.767643 kernel: SELinux: Initializing.
Sep 3 23:21:59.767651 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:21:59.767658 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:21:59.767665 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:21:59.767671 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:21:59.767678 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:21:59.767684 kernel: Remapping and enabling EFI services.
Sep 3 23:21:59.767691 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:21:59.767697 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:21:59.767704 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 3 23:21:59.767711 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 3 23:21:59.767722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:21:59.767729 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 3 23:21:59.767737 kernel: Detected PIPT I-cache on CPU2
Sep 3 23:21:59.767744 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 3 23:21:59.767751 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 3 23:21:59.767758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:21:59.767765 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 3 23:21:59.767772 kernel: Detected PIPT I-cache on CPU3
Sep 3 23:21:59.767780 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 3 23:21:59.767787 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 3 23:21:59.767794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:21:59.767800 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 3 23:21:59.767807 kernel: smp: Brought up 1 node, 4 CPUs
Sep 3 23:21:59.767814 kernel: SMP: Total of 4 processors activated.
Sep 3 23:21:59.767821 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:21:59.767828 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:21:59.767835 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 3 23:21:59.767843 kernel: CPU features: detected: Common not Private translations
Sep 3 23:21:59.767850 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:21:59.767856 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 3 23:21:59.767863 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 3 23:21:59.767870 kernel: CPU features: detected: LSE atomic instructions
Sep 3 23:21:59.767877 kernel: CPU features: detected: Privileged Access Never
Sep 3 23:21:59.767884 kernel: CPU features: detected: RAS Extension Support
Sep 3 23:21:59.767891 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 3 23:21:59.767897 kernel: alternatives: applying system-wide alternatives
Sep 3 23:21:59.767906 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 3 23:21:59.767913 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 3 23:21:59.767920 kernel: devtmpfs: initialized
Sep 3 23:21:59.767927 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:21:59.767934 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 3 23:21:59.767941 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 3 23:21:59.767948 kernel: 0 pages in range for non-PLT usage
Sep 3 23:21:59.767954 kernel: 508560 pages in range for PLT usage
Sep 3 23:21:59.767961 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:21:59.767969 kernel: SMBIOS 3.0.0 present.
Sep 3 23:21:59.767976 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 3 23:21:59.767983 kernel: DMI: Memory slots populated: 1/1
Sep 3 23:21:59.767990 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:21:59.767997 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:21:59.768004 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:21:59.768011 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:21:59.768018 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:21:59.768025 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
Sep 3 23:21:59.768033 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:21:59.768040 kernel: cpuidle: using governor menu
Sep 3 23:21:59.768046 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:21:59.768053 kernel: ASID allocator initialised with 32768 entries
Sep 3 23:21:59.768060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:21:59.768066 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:21:59.768073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:21:59.768080 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:21:59.768096 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:21:59.768105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:21:59.768126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:21:59.768133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:21:59.768140 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:21:59.768147 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:21:59.768154 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:21:59.768161 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:21:59.768167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:21:59.768174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:21:59.768182 kernel: ACPI: Interpreter enabled
Sep 3 23:21:59.768189 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:21:59.768196 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:21:59.768203 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:21:59.768209 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:21:59.768216 kernel: ACPI: CPU2 has been hot-added
Sep 3 23:21:59.768223 kernel: ACPI: CPU3 has been hot-added
Sep 3 23:21:59.768230 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 3 23:21:59.768237 kernel: printk: legacy console [ttyAMA0] enabled
Sep 3 23:21:59.768245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 3 23:21:59.768382 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:21:59.768449 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:21:59.768510 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:21:59.768568 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 3 23:21:59.768625 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 3 23:21:59.768634 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 3 23:21:59.768644 kernel: PCI host bridge to bus 0000:00
Sep 3 23:21:59.768709 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 3 23:21:59.768771 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:21:59.768825 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 3 23:21:59.768877 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 3 23:21:59.768954 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:21:59.769027 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 3 23:21:59.769181 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 3 23:21:59.769265 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 3 23:21:59.769328 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 3 23:21:59.769527 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 3 23:21:59.769670 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 3 23:21:59.769740 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 3 23:21:59.769806 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 3 23:21:59.769861 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:21:59.769915 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 3 23:21:59.769925 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:21:59.769933 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:21:59.769939 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:21:59.769946 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:21:59.769954 kernel: iommu: Default domain type: Translated
Sep 3 23:21:59.769961 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:21:59.769970 kernel: efivars: Registered efivars operations
Sep 3 23:21:59.769977 kernel: vgaarb: loaded
Sep 3 23:21:59.769984 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:21:59.769991 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:21:59.769998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:21:59.770005 kernel: pnp: PnP ACPI init
Sep 3 23:21:59.770081 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 3 23:21:59.770100 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:21:59.770122 kernel: NET: Registered PF_INET protocol family
Sep 3 23:21:59.770129 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:21:59.770136 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:21:59.770143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:21:59.770151 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:21:59.770158 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:21:59.770165 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:21:59.770173 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:21:59.770180 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:21:59.770189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:21:59.770197 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:21:59.770204 kernel: kvm [1]: HYP mode not available
Sep 3 23:21:59.770221 kernel: Initialise system trusted keyrings
Sep 3 23:21:59.770228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:21:59.770235 kernel: Key type asymmetric registered
Sep 3 23:21:59.770243 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:21:59.770251 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:21:59.770258 kernel: io scheduler mq-deadline registered
Sep 3 23:21:59.770266 kernel: io scheduler kyber registered
Sep 3 23:21:59.770273 kernel: io scheduler bfq registered
Sep 3 23:21:59.770280 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:21:59.770287 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:21:59.770295 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:21:59.770365 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 3 23:21:59.770375 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:21:59.770382 kernel: thunder_xcv, ver 1.0
Sep 3 23:21:59.770389 kernel: thunder_bgx, ver 1.0
Sep 3 23:21:59.770397 kernel: nicpf, ver 1.0
Sep 3 23:21:59.770404 kernel: nicvf, ver 1.0
Sep 3 23:21:59.770477 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:21:59.770534 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:21:59 UTC (1756941719)
Sep 3 23:21:59.770543 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:21:59.770550 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 3 23:21:59.770557 kernel: watchdog: NMI not fully supported
Sep 3 23:21:59.770564 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:21:59.770573 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:21:59.770580 kernel: Segment Routing with IPv6
Sep 3 23:21:59.770587 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:21:59.770593 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:21:59.770600 kernel: Key type dns_resolver registered
Sep 3 23:21:59.770607 kernel: registered taskstats version 1
Sep 3 23:21:59.770614 kernel: Loading compiled-in X.509 certificates
Sep 3 23:21:59.770621 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:21:59.770628 kernel: Demotion targets for Node 0: null
Sep 3 23:21:59.770636 kernel: Key type .fscrypt registered
Sep 3 23:21:59.770643 kernel: Key type fscrypt-provisioning registered
Sep 3 23:21:59.770649 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:21:59.770656 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:21:59.770663 kernel: ima: No architecture policies found
Sep 3 23:21:59.770670 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:21:59.770677 kernel: clk: Disabling unused clocks
Sep 3 23:21:59.770684 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:21:59.770691 kernel: Warning: unable to open an initial console.
Sep 3 23:21:59.770700 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:21:59.770706 kernel: Run /init as init process
Sep 3 23:21:59.770713 kernel: with arguments:
Sep 3 23:21:59.770720 kernel: /init
Sep 3 23:21:59.770727 kernel: with environment:
Sep 3 23:21:59.770733 kernel: HOME=/
Sep 3 23:21:59.770740 kernel: TERM=linux
Sep 3 23:21:59.770747 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:21:59.770754 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:21:59.770765 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:21:59.770773 systemd[1]: Detected virtualization kvm.
Sep 3 23:21:59.770781 systemd[1]: Detected architecture arm64.
Sep 3 23:21:59.770788 systemd[1]: Running in initrd.
Sep 3 23:21:59.770795 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:21:59.770802 systemd[1]: Hostname set to .
Sep 3 23:21:59.770810 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:21:59.770818 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:21:59.770826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:21:59.770833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:21:59.770841 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:21:59.770848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:21:59.770856 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:21:59.770864 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:21:59.770874 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:21:59.770882 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:21:59.770889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:21:59.770896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:21:59.770904 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:21:59.770911 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:21:59.770918 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:21:59.770926 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:21:59.770935 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:21:59.770942 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:21:59.770950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:21:59.770957 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:21:59.770965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:21:59.770972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:21:59.770980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:21:59.770988 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:21:59.770996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:21:59.771005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:21:59.771012 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:21:59.771020 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:21:59.771028 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:21:59.771036 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:21:59.771043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:21:59.771051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:21:59.771058 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:21:59.771068 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:21:59.771076 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:21:59.771158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:21:59.771306 systemd-journald[243]: Collecting audit messages is disabled.
Sep 3 23:21:59.771420 systemd-journald[243]: Journal started
Sep 3 23:21:59.771441 systemd-journald[243]: Runtime Journal (/run/log/journal/cb404b776e4d4de29c2beeaf3c226bcc) is 6M, max 48.5M, 42.4M free.
Sep 3 23:21:59.759444 systemd-modules-load[245]: Inserted module 'overlay'
Sep 3 23:21:59.775546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:21:59.775568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:21:59.776931 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 3 23:21:59.778638 kernel: Bridge firewalling registered
Sep 3 23:21:59.778659 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:21:59.779847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:21:59.782229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:21:59.785604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:21:59.787184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:21:59.789277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:21:59.802363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:21:59.807283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:21:59.810927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:21:59.816219 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:21:59.819754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:21:59.821018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:21:59.823639 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:21:59.827122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:21:59.852712 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:21:59.866690 systemd-resolved[293]: Positive Trust Anchors:
Sep 3 23:21:59.866712 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:21:59.866755 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:21:59.871834 systemd-resolved[293]: Defaulting to hostname 'linux'.
Sep 3 23:21:59.872910 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:21:59.874911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:21:59.924141 kernel: SCSI subsystem initialized
Sep 3 23:21:59.929126 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:21:59.937152 kernel: iscsi: registered transport (tcp)
Sep 3 23:21:59.949125 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:21:59.949155 kernel: QLogic iSCSI HBA Driver
Sep 3 23:21:59.965487 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:21:59.989476 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:21:59.991360 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:22:00.036674 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:22:00.039003 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:22:00.098158 kernel: raid6: neonx8 gen() 15394 MB/s
Sep 3 23:22:00.115129 kernel: raid6: neonx4 gen() 15673 MB/s
Sep 3 23:22:00.132142 kernel: raid6: neonx2 gen() 13064 MB/s
Sep 3 23:22:00.149129 kernel: raid6: neonx1 gen() 10435 MB/s
Sep 3 23:22:00.166134 kernel: raid6: int64x8 gen() 6865 MB/s
Sep 3 23:22:00.183131 kernel: raid6: int64x4 gen() 7299 MB/s
Sep 3 23:22:00.200129 kernel: raid6: int64x2 gen() 6049 MB/s
Sep 3 23:22:00.217132 kernel: raid6: int64x1 gen() 4999 MB/s
Sep 3 23:22:00.217150 kernel: raid6: using algorithm neonx4 gen() 15673 MB/s
Sep 3 23:22:00.234146 kernel: raid6: .... xor() 12250 MB/s, rmw enabled
Sep 3 23:22:00.234173 kernel: raid6: using neon recovery algorithm
Sep 3 23:22:00.239309 kernel: xor: measuring software checksum speed
Sep 3 23:22:00.239332 kernel: 8regs : 21567 MB/sec
Sep 3 23:22:00.240414 kernel: 32regs : 21687 MB/sec
Sep 3 23:22:00.240427 kernel: arm64_neon : 28109 MB/sec
Sep 3 23:22:00.240436 kernel: xor: using function: arm64_neon (28109 MB/sec)
Sep 3 23:22:00.293141 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:22:00.299470 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:22:00.301680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:22:00.327590 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Sep 3 23:22:00.332491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:22:00.334186 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:22:00.357589 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Sep 3 23:22:00.379580 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:22:00.381685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:22:00.433014 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:22:00.435469 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:22:00.489122 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 3 23:22:00.489299 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 3 23:22:00.492260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:22:00.496796 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:22:00.496817 kernel: GPT:9289727 != 19775487
Sep 3 23:22:00.496827 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:22:00.496835 kernel: GPT:9289727 != 19775487
Sep 3 23:22:00.492378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:00.501965 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:22:00.501984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:00.496965 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:00.503560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:22:00.531157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:00.538658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 3 23:22:00.539810 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:22:00.547353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 3 23:22:00.548313 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 3 23:22:00.556315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:22:00.563526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 3 23:22:00.564516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:22:00.566318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:22:00.568011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:22:00.570537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:22:00.572076 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:22:00.587720 disk-uuid[594]: Primary Header is updated.
Sep 3 23:22:00.587720 disk-uuid[594]: Secondary Entries is updated.
Sep 3 23:22:00.587720 disk-uuid[594]: Secondary Header is updated.
Sep 3 23:22:00.592155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:00.594421 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:22:01.600063 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:22:01.600134 disk-uuid[597]: The operation has completed successfully.
Sep 3 23:22:01.627368 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:22:01.627492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:22:01.653359 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:22:01.674266 sh[614]: Success
Sep 3 23:22:01.686307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:22:01.686360 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:22:01.687416 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:22:01.694130 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:22:01.722088 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:22:01.724490 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:22:01.735774 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:22:01.742559 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (626)
Sep 3 23:22:01.742601 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:22:01.742612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:01.747134 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:22:01.747191 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:22:01.747904 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:22:01.749076 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:22:01.750058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:22:01.750873 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:22:01.753587 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:22:01.773714 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Sep 3 23:22:01.773751 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:01.773762 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:01.776497 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:22:01.776533 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:22:01.781243 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:01.781607 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:22:01.783760 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:22:01.843797 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:22:01.847971 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:22:01.881123 ignition[706]: Ignition 2.21.0
Sep 3 23:22:01.881140 ignition[706]: Stage: fetch-offline
Sep 3 23:22:01.881184 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:01.881192 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:01.881367 ignition[706]: parsed url from cmdline: ""
Sep 3 23:22:01.881370 ignition[706]: no config URL provided
Sep 3 23:22:01.881374 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:22:01.881381 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:22:01.881400 ignition[706]: op(1): [started] loading QEMU firmware config module
Sep 3 23:22:01.881404 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 3 23:22:01.889686 systemd-networkd[806]: lo: Link UP
Sep 3 23:22:01.889700 systemd-networkd[806]: lo: Gained carrier
Sep 3 23:22:01.889919 ignition[706]: op(1): [finished] loading QEMU firmware config module
Sep 3 23:22:01.890460 systemd-networkd[806]: Enumeration completed
Sep 3 23:22:01.890570 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:22:01.890879 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:01.890882 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:22:01.891786 systemd-networkd[806]: eth0: Link UP
Sep 3 23:22:01.891875 systemd-networkd[806]: eth0: Gained carrier
Sep 3 23:22:01.891883 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:01.892595 systemd[1]: Reached target network.target - Network.
Sep 3 23:22:01.913167 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:22:01.944340 ignition[706]: parsing config with SHA512: 8fdafbd03284c94c12897dde96a646839b5810b3805b70a44a068ce45ed588678d9500b77191f42fe77984379c674a70c041aaefef7b9cfcf048a60349fba020
Sep 3 23:22:01.949196 unknown[706]: fetched base config from "system"
Sep 3 23:22:01.949216 unknown[706]: fetched user config from "qemu"
Sep 3 23:22:01.949595 ignition[706]: fetch-offline: fetch-offline passed
Sep 3 23:22:01.949654 ignition[706]: Ignition finished successfully
Sep 3 23:22:01.953296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:22:01.954436 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 3 23:22:01.956696 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:22:01.983714 ignition[814]: Ignition 2.21.0
Sep 3 23:22:01.983731 ignition[814]: Stage: kargs
Sep 3 23:22:01.983894 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:01.983903 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:01.985733 ignition[814]: kargs: kargs passed
Sep 3 23:22:01.985809 ignition[814]: Ignition finished successfully
Sep 3 23:22:01.988781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:22:01.991238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:22:02.022536 ignition[822]: Ignition 2.21.0
Sep 3 23:22:02.022550 ignition[822]: Stage: disks
Sep 3 23:22:02.022688 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:02.022696 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:02.027925 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:22:02.025340 ignition[822]: disks: disks passed
Sep 3 23:22:02.029133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:22:02.025399 ignition[822]: Ignition finished successfully
Sep 3 23:22:02.030601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:22:02.032068 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:22:02.033752 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:22:02.035072 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:22:02.038859 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:22:02.061758 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:22:02.067937 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:22:02.070277 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:22:02.136133 kernel: EXT4-fs (vda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:22:02.136489 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:22:02.137618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:22:02.139855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:22:02.141507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:22:02.142377 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:22:02.142418 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:22:02.142444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:22:02.154414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:22:02.156861 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:22:02.159774 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Sep 3 23:22:02.161962 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:02.161996 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:02.164518 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:22:02.164551 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:22:02.165766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:22:02.194793 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:22:02.199376 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:22:02.203086 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:22:02.207940 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:22:02.276709 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:22:02.278639 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:22:02.280102 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:22:02.295212 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:02.306415 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:22:02.317988 ignition[955]: INFO : Ignition 2.21.0
Sep 3 23:22:02.317988 ignition[955]: INFO : Stage: mount
Sep 3 23:22:02.319650 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:02.319650 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:02.321187 ignition[955]: INFO : mount: mount passed
Sep 3 23:22:02.322742 ignition[955]: INFO : Ignition finished successfully
Sep 3 23:22:02.323490 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:22:02.325156 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:22:02.867666 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:22:02.869179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:22:02.894466 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 3 23:22:02.894506 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:22:02.894517 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:22:02.897355 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:22:02.897377 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:22:02.898736 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:22:02.925759 ignition[985]: INFO : Ignition 2.21.0
Sep 3 23:22:02.925759 ignition[985]: INFO : Stage: files
Sep 3 23:22:02.927999 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:02.927999 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:02.927999 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:22:02.931231 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:22:02.931231 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:22:02.931231 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:22:02.931231 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:22:02.931231 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:22:02.930662 unknown[985]: wrote ssh authorized keys file for user: core
Sep 3 23:22:02.937737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:22:02.937737 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 3 23:22:03.057359 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:22:03.437320 systemd-networkd[806]: eth0: Gained IPv6LL
Sep 3 23:22:03.740666 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 3 23:22:03.740666 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:22:03.744085 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:22:03.756671 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:22:03.758467 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:22:03.758467 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:22:03.762280 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:22:03.762280 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:22:03.762280 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 3 23:22:04.370657 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 3 23:22:04.717826 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 3 23:22:04.717826 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 3 23:22:04.720879 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:22:04.752960 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:22:04.752960 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 3 23:22:04.752960 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 3 23:22:04.756880 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:22:04.756880 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:22:04.756880 ignition[985]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 3 23:22:04.756880 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:22:04.766902 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:22:04.770433 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:22:04.771640 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:22:04.771640 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:22:04.771640 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:22:04.776336 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:22:04.776336 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:22:04.776336 ignition[985]: INFO : files: files passed
Sep 3 23:22:04.776336 ignition[985]: INFO : Ignition finished successfully
Sep 3 23:22:04.773297 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:22:04.776612 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:22:04.778956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:22:04.787062 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:22:04.787201 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:22:04.789479 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 3 23:22:04.790736 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:22:04.790736 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:22:04.793162 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:22:04.792880 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:22:04.794220 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:22:04.796652 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:22:04.822677 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:22:04.822783 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:22:04.824530 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:22:04.825301 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:22:04.826041 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:22:04.826805 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:22:04.841253 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:22:04.845226 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:22:04.864699 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:22:04.865784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:22:04.867401 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:22:04.868885 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:22:04.869001 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:22:04.871060 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:22:04.872788 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:22:04.874012 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:22:04.875361 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:22:04.877103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:22:04.878779 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:22:04.880293 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:22:04.881883 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:22:04.883442 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:22:04.884955 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:22:04.886527 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:22:04.887790 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:22:04.887914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:22:04.889785 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:22:04.891336 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:22:04.893016 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:22:04.896181 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:22:04.897249 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:22:04.897366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:22:04.899695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:22:04.899804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:22:04.901387 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:22:04.902720 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:22:04.906179 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:22:04.907181 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:22:04.908875 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:22:04.910091 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:22:04.910192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:22:04.911415 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:22:04.911505 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:22:04.912748 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:22:04.912868 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:22:04.914211 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:22:04.914307 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:22:04.916323 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:22:04.918462 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:22:04.919190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:22:04.919302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:22:04.920729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:22:04.920834 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:22:04.925392 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:22:04.929330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:22:04.937457 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:22:04.942496 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:22:04.942602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:22:04.947214 ignition[1041]: INFO : Ignition 2.21.0
Sep 3 23:22:04.947214 ignition[1041]: INFO : Stage: umount
Sep 3 23:22:04.947214 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:22:04.947214 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:22:04.947214 ignition[1041]: INFO : umount: umount passed
Sep 3 23:22:04.947214 ignition[1041]: INFO : Ignition finished successfully
Sep 3 23:22:04.947884 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:22:04.949153 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:22:04.950873 systemd[1]: Stopped target network.target - Network.
Sep 3 23:22:04.951825 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:22:04.951880 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:22:04.953162 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:22:04.953211 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:22:04.954506 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:22:04.954550 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:22:04.955873 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:22:04.955908 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:22:04.957168 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:22:04.957210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:22:04.958683 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:22:04.959903 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:22:04.968579 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:22:04.968683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:22:04.971639 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:22:04.971909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:22:04.971947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:22:04.975664 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:22:04.975858 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:22:04.975952 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:22:04.978692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:22:04.979033 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:22:04.980050 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:22:04.980100 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:22:04.982666 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:22:04.983870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:22:04.983920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:22:04.985803 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:22:04.985844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:22:04.987866 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:22:04.987904 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:22:04.989523 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:22:04.992892 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:22:05.004500 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:22:05.004641 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:22:05.006304 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:22:05.007741 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:22:05.009049 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:22:05.009149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:22:05.011246 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:22:05.011281 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:22:05.012688 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:22:05.012731 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:22:05.014858 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:22:05.014903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:22:05.016928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:22:05.016977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:22:05.019905 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:22:05.021693 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:22:05.021750 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:22:05.024168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:22:05.024216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:22:05.026997 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 3 23:22:05.027060 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:22:05.029861 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:22:05.029902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:22:05.032054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:22:05.032123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:22:05.036243 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:22:05.036329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:22:05.037413 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:22:05.039434 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:22:05.066992 systemd[1]: Switching root.
Sep 3 23:22:05.116167 systemd-journald[243]: Journal stopped
Sep 3 23:22:05.888830 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:22:05.888878 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:22:05.888893 kernel: SELinux: policy capability open_perms=1
Sep 3 23:22:05.888902 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:22:05.888912 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:22:05.888923 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:22:05.888934 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:22:05.888944 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:22:05.888954 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:22:05.888964 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:22:05.888973 kernel: audit: type=1403 audit(1756941725.314:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:22:05.888986 systemd[1]: Successfully loaded SELinux policy in 50.537ms.
Sep 3 23:22:05.889006 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.225ms.
Sep 3 23:22:05.889018 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:22:05.889029 systemd[1]: Detected virtualization kvm.
Sep 3 23:22:05.889039 systemd[1]: Detected architecture arm64.
Sep 3 23:22:05.889049 systemd[1]: Detected first boot.
Sep 3 23:22:05.889060 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:22:05.889084 zram_generator::config[1085]: No configuration found.
Sep 3 23:22:05.889096 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:22:05.889118 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:22:05.889133 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:22:05.889143 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:22:05.889154 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:22:05.889169 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:22:05.889179 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:22:05.889190 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:22:05.889200 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:22:05.889211 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:22:05.889222 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:22:05.889233 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:22:05.889244 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:22:05.889254 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:22:05.889265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:22:05.889275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:22:05.889285 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:22:05.889296 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:22:05.889306 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:22:05.889318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:22:05.889330 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 3 23:22:05.889340 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:22:05.889351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:22:05.889360 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:22:05.889371 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:22:05.889381 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:22:05.889392 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:22:05.889405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:22:05.889416 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:22:05.889427 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:22:05.889437 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:22:05.889448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:22:05.889459 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:22:05.889469 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:22:05.889479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:22:05.889489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:22:05.889501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:22:05.889511 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:22:05.889521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:22:05.889532 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:22:05.889542 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:22:05.889553 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:22:05.889564 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:22:05.889576 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:22:05.889587 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:22:05.889599 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:22:05.889609 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:22:05.889619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:22:05.889633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:22:05.889644 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:22:05.889655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:22:05.889665 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:22:05.889676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:22:05.889688 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:22:05.889699 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:22:05.889709 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:22:05.889719 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:22:05.889729 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:22:05.889740 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:22:05.889750 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:22:05.889762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:22:05.889774 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:22:05.889785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:22:05.889796 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:22:05.889805 kernel: loop: module loaded
Sep 3 23:22:05.889815 kernel: ACPI: bus type drm_connector registered
Sep 3 23:22:05.889825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:22:05.889835 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:22:05.889845 kernel: fuse: init (API version 7.41)
Sep 3 23:22:05.889855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:22:05.889867 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:22:05.889878 systemd[1]: Stopped verity-setup.service.
Sep 3 23:22:05.889888 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:22:05.889898 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:22:05.889908 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:22:05.889919 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:22:05.889930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:22:05.889941 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:22:05.889951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:22:05.889963 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:22:05.889974 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:22:05.889985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:22:05.889995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:22:05.890005 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:22:05.890015 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:22:05.890025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:22:05.890057 systemd-journald[1150]: Collecting audit messages is disabled.
Sep 3 23:22:05.890087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:22:05.890101 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:22:05.890143 systemd-journald[1150]: Journal started
Sep 3 23:22:05.890166 systemd-journald[1150]: Runtime Journal (/run/log/journal/cb404b776e4d4de29c2beeaf3c226bcc) is 6M, max 48.5M, 42.4M free.
Sep 3 23:22:05.672819 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:22:05.696180 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 3 23:22:05.696559 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:22:05.891402 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:22:05.893245 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:22:05.894881 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:22:05.895047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:22:05.896289 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:22:05.897581 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:22:05.898914 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:22:05.900380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:22:05.901866 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:22:05.913933 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:22:05.916289 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:22:05.918015 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:22:05.918997 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:22:05.919025 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:22:05.920863 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:22:05.928855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:22:05.929832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:22:05.931213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:22:05.932824 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:22:05.933946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:22:05.937242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:22:05.938528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:22:05.940221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:22:05.942030 systemd-journald[1150]: Time spent on flushing to /var/log/journal/cb404b776e4d4de29c2beeaf3c226bcc is 11.376ms for 882 entries.
Sep 3 23:22:05.942030 systemd-journald[1150]: System Journal (/var/log/journal/cb404b776e4d4de29c2beeaf3c226bcc) is 8M, max 195.6M, 187.6M free.
Sep 3 23:22:05.966435 systemd-journald[1150]: Received client request to flush runtime journal.
Sep 3 23:22:05.942760 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:22:05.949355 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:22:05.954144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:22:05.957585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:22:05.959574 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:22:05.968188 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:22:05.972431 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:22:05.974596 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:22:05.977363 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:22:05.978601 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 3 23:22:05.978620 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 3 23:22:05.979192 kernel: loop0: detected capacity change from 0 to 138376
Sep 3 23:22:05.981651 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:22:05.990303 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:22:05.997095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:22:05.995570 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:22:06.014218 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:22:06.022139 kernel: loop1: detected capacity change from 0 to 107312
Sep 3 23:22:06.030358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:22:06.033880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:22:06.048141 kernel: loop2: detected capacity change from 0 to 203944
Sep 3 23:22:06.057559 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 3 23:22:06.057580 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Sep 3 23:22:06.063141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:22:06.071174 kernel: loop3: detected capacity change from 0 to 138376
Sep 3 23:22:06.079186 kernel: loop4: detected capacity change from 0 to 107312
Sep 3 23:22:06.087178 kernel: loop5: detected capacity change from 0 to 203944
Sep 3 23:22:06.092275 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 3 23:22:06.092675 (sd-merge)[1229]: Merged extensions into '/usr'.
Sep 3 23:22:06.096441 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:22:06.096460 systemd[1]: Reloading...
Sep 3 23:22:06.159144 zram_generator::config[1258]: No configuration found.
Sep 3 23:22:06.211514 ldconfig[1197]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:22:06.237672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:22:06.299457 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:22:06.299901 systemd[1]: Reloading finished in 203 ms.
Sep 3 23:22:06.318143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:22:06.319386 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:22:06.334386 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:22:06.335975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:22:06.345351 systemd[1]: Reload requested from client PID 1289 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:22:06.345367 systemd[1]: Reloading...
Sep 3 23:22:06.354609 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:22:06.354641 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:22:06.354869 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:22:06.355049 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:22:06.355680 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:22:06.355885 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Sep 3 23:22:06.355933 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Sep 3 23:22:06.358503 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:22:06.358517 systemd-tmpfiles[1290]: Skipping /boot
Sep 3 23:22:06.367529 systemd-tmpfiles[1290]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:22:06.367550 systemd-tmpfiles[1290]: Skipping /boot
Sep 3 23:22:06.398128 zram_generator::config[1320]: No configuration found.
Sep 3 23:22:06.462992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:22:06.524788 systemd[1]: Reloading finished in 179 ms.
Sep 3 23:22:06.544558 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:22:06.549847 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:22:06.559338 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:22:06.561834 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:22:06.563811 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:22:06.578235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:22:06.581231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:22:06.583044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:22:06.588864 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:22:06.591198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:22:06.597720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:22:06.602321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:22:06.605489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:22:06.606579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:22:06.606687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:22:06.609134 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:22:06.610699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:22:06.610842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:22:06.613650 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:22:06.613782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:22:06.619693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:22:06.621781 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:22:06.623804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:22:06.624061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:22:06.632139 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:22:06.633607 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:22:06.634830 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Sep 3 23:22:06.637966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:22:06.640017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:22:06.642738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:22:06.645411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:22:06.647289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:22:06.649201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:22:06.649319 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:22:06.649427 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:22:06.657420 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:22:06.658538 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:22:06.661560 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:22:06.662824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:22:06.662975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:22:06.665469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:22:06.666602 augenrules[1398]: No rules
Sep 3 23:22:06.668430 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:22:06.668612 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:22:06.669635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:22:06.669776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:22:06.671026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:22:06.671198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:22:06.682331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:22:06.683552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:22:06.683608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:22:06.685329 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 3 23:22:06.686572 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:22:06.686770 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:22:06.727315 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 3 23:22:06.764627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:22:06.770074 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:22:06.787182 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:22:06.824406 systemd-networkd[1437]: lo: Link UP
Sep 3 23:22:06.824414 systemd-networkd[1437]: lo: Gained carrier
Sep 3 23:22:06.825389 systemd-networkd[1437]: Enumeration completed
Sep 3 23:22:06.825490 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:22:06.825789 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:06.825800 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:22:06.826289 systemd-networkd[1437]: eth0: Link UP
Sep 3 23:22:06.826395 systemd-networkd[1437]: eth0: Gained carrier
Sep 3 23:22:06.826412 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:22:06.833251 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:22:06.835145 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:22:06.840656 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 3 23:22:06.841668 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:22:06.843830 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:22:06.844272 systemd-resolved[1356]: Positive Trust Anchors:
Sep 3 23:22:06.844287 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:22:06.844319 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:22:06.844472 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Sep 3 23:22:06.846972 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 3 23:22:06.847030 systemd-timesyncd[1438]: Initial clock synchronization to Wed 2025-09-03 23:22:06.942092 UTC.
Sep 3 23:22:06.854976 systemd-resolved[1356]: Defaulting to hostname 'linux'.
Sep 3 23:22:06.858297 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:22:06.859662 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:22:06.860930 systemd[1]: Reached target network.target - Network.
Sep 3 23:22:06.863213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:22:06.865218 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:22:06.866155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:22:06.869213 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:22:06.870382 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:22:06.871350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 3 23:22:06.874166 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 3 23:22:06.875163 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 3 23:22:06.875193 systemd[1]: Reached target paths.target - Path Units. Sep 3 23:22:06.875896 systemd[1]: Reached target timers.target - Timer Units. Sep 3 23:22:06.878267 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 3 23:22:06.881451 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 3 23:22:06.885368 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 3 23:22:06.887372 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 3 23:22:06.888339 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 3 23:22:06.896576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 3 23:22:06.897808 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 3 23:22:06.899340 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 3 23:22:06.905769 systemd[1]: Reached target sockets.target - Socket Units. Sep 3 23:22:06.906553 systemd[1]: Reached target basic.target - Basic System. Sep 3 23:22:06.907272 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:22:06.907304 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 3 23:22:06.908240 systemd[1]: Starting containerd.service - containerd container runtime... Sep 3 23:22:06.909906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Sep 3 23:22:06.911570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 3 23:22:06.915747 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 3 23:22:06.918040 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 3 23:22:06.918905 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 3 23:22:06.919809 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 3 23:22:06.921661 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 3 23:22:06.922811 jq[1475]: false Sep 3 23:22:06.925889 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 3 23:22:06.928608 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 3 23:22:06.930905 extend-filesystems[1476]: Found /dev/vda6 Sep 3 23:22:06.933308 extend-filesystems[1476]: Found /dev/vda9 Sep 3 23:22:06.934628 extend-filesystems[1476]: Checking size of /dev/vda9 Sep 3 23:22:06.940710 extend-filesystems[1476]: Resized partition /dev/vda9 Sep 3 23:22:06.944281 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 3 23:22:06.945221 extend-filesystems[1495]: resize2fs 1.47.2 (1-Jan-2025) Sep 3 23:22:06.946484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 3 23:22:06.949198 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 3 23:22:06.949936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 3 23:22:06.952152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 3 23:22:06.957594 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 3 23:22:06.960345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 3 23:22:06.965039 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 3 23:22:06.966992 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 3 23:22:06.967432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 3 23:22:06.967715 systemd[1]: motdgen.service: Deactivated successfully. Sep 3 23:22:06.988513 jq[1503]: true Sep 3 23:22:06.968090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 3 23:22:06.973056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 3 23:22:06.973498 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 3 23:22:06.989388 jq[1508]: true Sep 3 23:22:07.002613 update_engine[1500]: I20250903 23:22:07.002053 1500 main.cc:92] Flatcar Update Engine starting Sep 3 23:22:07.017061 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 3 23:22:07.017111 tar[1507]: linux-arm64/helm Sep 3 23:22:07.017493 dbus-daemon[1473]: [system] SELinux support is enabled Sep 3 23:22:07.017632 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 3 23:22:07.020386 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 3 23:22:07.020386 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 3 23:22:07.020386 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 3 23:22:07.020081 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (Power Button) Sep 3 23:22:07.036989 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Sep 3 23:22:07.039235 update_engine[1500]: I20250903 23:22:07.027076 1500 update_check_scheduler.cc:74] Next update check in 5m44s Sep 3 23:22:07.020337 systemd-logind[1496]: New seat seat0. 
Sep 3 23:22:07.022178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 3 23:22:07.022218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 3 23:22:07.030312 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 3 23:22:07.030331 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 3 23:22:07.031935 systemd[1]: Started systemd-logind.service - User Login Management. Sep 3 23:22:07.036238 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 3 23:22:07.037910 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 3 23:22:07.040169 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 3 23:22:07.046422 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Sep 3 23:22:07.048487 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 3 23:22:07.050498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 3 23:22:07.053681 systemd[1]: Started update-engine.service - Update Engine. Sep 3 23:22:07.054917 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 3 23:22:07.057337 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 3 23:22:07.098441 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 3 23:22:07.195557 containerd[1524]: time="2025-09-03T23:22:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 3 23:22:07.198585 containerd[1524]: time="2025-09-03T23:22:07.198460393Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 3 23:22:07.209750 containerd[1524]: time="2025-09-03T23:22:07.209702266Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.403µs" Sep 3 23:22:07.209750 containerd[1524]: time="2025-09-03T23:22:07.209739869Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 3 23:22:07.209857 containerd[1524]: time="2025-09-03T23:22:07.209759257Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 3 23:22:07.210005 containerd[1524]: time="2025-09-03T23:22:07.209907240Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 3 23:22:07.210005 containerd[1524]: time="2025-09-03T23:22:07.209929259Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 3 23:22:07.210005 containerd[1524]: time="2025-09-03T23:22:07.209952169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210068 containerd[1524]: time="2025-09-03T23:22:07.210010617Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210068 containerd[1524]: time="2025-09-03T23:22:07.210022356Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210314 containerd[1524]: time="2025-09-03T23:22:07.210285292Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210314 containerd[1524]: time="2025-09-03T23:22:07.210308364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210369 containerd[1524]: time="2025-09-03T23:22:07.210319616Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210369 containerd[1524]: time="2025-09-03T23:22:07.210328278Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210423 containerd[1524]: time="2025-09-03T23:22:07.210405872Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210680 containerd[1524]: time="2025-09-03T23:22:07.210599553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210680 containerd[1524]: time="2025-09-03T23:22:07.210633958Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 3 23:22:07.210680 containerd[1524]: time="2025-09-03T23:22:07.210644887Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 3 23:22:07.210792 containerd[1524]: time="2025-09-03T23:22:07.210771862Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 3 23:22:07.211221 containerd[1524]: time="2025-09-03T23:22:07.211041397Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 3 23:22:07.211221 containerd[1524]: time="2025-09-03T23:22:07.211131093Z" level=info msg="metadata content store policy set" policy=shared Sep 3 23:22:07.214530 containerd[1524]: time="2025-09-03T23:22:07.214495798Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 3 23:22:07.214586 containerd[1524]: time="2025-09-03T23:22:07.214544330Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 3 23:22:07.214586 containerd[1524]: time="2025-09-03T23:22:07.214557485Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 3 23:22:07.214586 containerd[1524]: time="2025-09-03T23:22:07.214568333Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 3 23:22:07.214586 containerd[1524]: time="2025-09-03T23:22:07.214580678Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 3 23:22:07.214681 containerd[1524]: time="2025-09-03T23:22:07.214601685Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 3 23:22:07.214681 containerd[1524]: time="2025-09-03T23:22:07.214630545Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 3 23:22:07.214681 containerd[1524]: time="2025-09-03T23:22:07.214657827Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 3 23:22:07.214681 containerd[1524]: time="2025-09-03T23:22:07.214670334Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Sep 3 23:22:07.214681 containerd[1524]: time="2025-09-03T23:22:07.214680939Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 3 23:22:07.214761 containerd[1524]: time="2025-09-03T23:22:07.214690694Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 3 23:22:07.214761 containerd[1524]: time="2025-09-03T23:22:07.214703241Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214826169Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214856608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214873608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214884132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214894291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214908984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214920115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214929749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: 
time="2025-09-03T23:22:07.214940759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214951283Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 3 23:22:07.215001 containerd[1524]: time="2025-09-03T23:22:07.214964114Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 3 23:22:07.215216 containerd[1524]: time="2025-09-03T23:22:07.215158402Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 3 23:22:07.215216 containerd[1524]: time="2025-09-03T23:22:07.215174633Z" level=info msg="Start snapshots syncer" Sep 3 23:22:07.215216 containerd[1524]: time="2025-09-03T23:22:07.215204221Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 3 23:22:07.215544 containerd[1524]: time="2025-09-03T23:22:07.215426236Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 3 23:22:07.215544 containerd[1524]: time="2025-09-03T23:22:07.215484603Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 3 23:22:07.215724 containerd[1524]: time="2025-09-03T23:22:07.215555397Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 3 23:22:07.215724 containerd[1524]: time="2025-09-03T23:22:07.215691763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 3 23:22:07.215724 containerd[1524]: time="2025-09-03T23:22:07.215715685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 3 23:22:07.215776 containerd[1524]: time="2025-09-03T23:22:07.215725723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 3 23:22:07.215776 containerd[1524]: time="2025-09-03T23:22:07.215738028Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 3 23:22:07.215776 containerd[1524]: time="2025-09-03T23:22:07.215750130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 3 23:22:07.215776 containerd[1524]: time="2025-09-03T23:22:07.215761747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 3 23:22:07.215776 containerd[1524]: time="2025-09-03T23:22:07.215771785Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 3 23:22:07.215858 containerd[1524]: time="2025-09-03T23:22:07.215794331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 3 23:22:07.215858 containerd[1524]: time="2025-09-03T23:22:07.215804976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 3 23:22:07.215858 containerd[1524]: time="2025-09-03T23:22:07.215815257Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 3 23:22:07.215858 containerd[1524]: time="2025-09-03T23:22:07.215849784Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215863263Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215871925Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215880789Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215888439Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215897506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 3 23:22:07.215922 containerd[1524]: time="2025-09-03T23:22:07.215907544Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 3 23:22:07.216016 containerd[1524]: time="2025-09-03T23:22:07.215982750Z" level=info msg="runtime interface created" Sep 3 23:22:07.216016 containerd[1524]: time="2025-09-03T23:22:07.215988336Z" level=info msg="created NRI interface" Sep 3 23:22:07.216016 containerd[1524]: time="2025-09-03T23:22:07.215998981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 3 23:22:07.216016 containerd[1524]: time="2025-09-03T23:22:07.216009464Z" level=info msg="Connect containerd service" Sep 3 23:22:07.216080 containerd[1524]: time="2025-09-03T23:22:07.216032981Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 3 23:22:07.216938 containerd[1524]: 
time="2025-09-03T23:22:07.216899668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:22:07.299869 containerd[1524]: time="2025-09-03T23:22:07.299773008Z" level=info msg="Start subscribing containerd event" Sep 3 23:22:07.299869 containerd[1524]: time="2025-09-03T23:22:07.299825466Z" level=info msg="Start recovering state" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299920546Z" level=info msg="Start event monitor" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299935522Z" level=info msg="Start cni network conf syncer for default" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299942282Z" level=info msg="Start streaming server" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299950215Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299958270Z" level=info msg="runtime interface starting up..." Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299964058Z" level=info msg="starting plugins..." Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.299977213Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.300143572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 3 23:22:07.301281 containerd[1524]: time="2025-09-03T23:22:07.300193237Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 3 23:22:07.300347 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 3 23:22:07.301536 containerd[1524]: time="2025-09-03T23:22:07.301516987Z" level=info msg="containerd successfully booted in 0.106318s" Sep 3 23:22:07.385717 tar[1507]: linux-arm64/LICENSE Sep 3 23:22:07.385914 tar[1507]: linux-arm64/README.md Sep 3 23:22:07.405445 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 3 23:22:08.045244 systemd-networkd[1437]: eth0: Gained IPv6LL Sep 3 23:22:08.049170 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 3 23:22:08.051807 systemd[1]: Reached target network-online.target - Network is Online. Sep 3 23:22:08.055041 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 3 23:22:08.057879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:22:08.065315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 3 23:22:08.081972 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 3 23:22:08.082264 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 3 23:22:08.084284 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 3 23:22:08.097145 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 3 23:22:08.382503 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 3 23:22:08.404777 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 3 23:22:08.408217 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 3 23:22:08.439900 systemd[1]: issuegen.service: Deactivated successfully. Sep 3 23:22:08.440201 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 3 23:22:08.445579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 3 23:22:08.474967 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 3 23:22:08.477937 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 3 23:22:08.480235 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 3 23:22:08.481438 systemd[1]: Reached target getty.target - Login Prompts. Sep 3 23:22:08.653269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:22:08.654556 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 3 23:22:08.656014 systemd[1]: Startup finished in 2.032s (kernel) + 5.713s (initrd) + 3.393s (userspace) = 11.138s. Sep 3 23:22:08.657919 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 3 23:22:09.021932 kubelet[1612]: E0903 23:22:09.021873 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 3 23:22:09.024349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 3 23:22:09.024495 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 3 23:22:09.024778 systemd[1]: kubelet.service: Consumed 764ms CPU time, 256.7M memory peak. Sep 3 23:22:12.680676 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 3 23:22:12.681848 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:47926.service - OpenSSH per-connection server daemon (10.0.0.1:47926). Sep 3 23:22:12.752232 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:22:12.753818 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:22:12.759486 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 3 23:22:12.760550 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 3 23:22:12.765861 systemd-logind[1496]: New session 1 of user core. Sep 3 23:22:12.785148 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 3 23:22:12.787569 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 3 23:22:12.802066 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 3 23:22:12.804090 systemd-logind[1496]: New session c1 of user core. Sep 3 23:22:12.905428 systemd[1629]: Queued start job for default target default.target. Sep 3 23:22:12.922165 systemd[1629]: Created slice app.slice - User Application Slice. Sep 3 23:22:12.922194 systemd[1629]: Reached target paths.target - Paths. Sep 3 23:22:12.922233 systemd[1629]: Reached target timers.target - Timers. Sep 3 23:22:12.923546 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 3 23:22:12.932377 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 3 23:22:12.932438 systemd[1629]: Reached target sockets.target - Sockets. Sep 3 23:22:12.932477 systemd[1629]: Reached target basic.target - Basic System. Sep 3 23:22:12.932505 systemd[1629]: Reached target default.target - Main User Target. Sep 3 23:22:12.932532 systemd[1629]: Startup finished in 122ms. Sep 3 23:22:12.932775 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 3 23:22:12.934316 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 3 23:22:12.992332 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:47946.service - OpenSSH per-connection server daemon (10.0.0.1:47946). 
Sep 3 23:22:13.053563 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 47946 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:22:13.055082 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:22:13.059171 systemd-logind[1496]: New session 2 of user core. Sep 3 23:22:13.068290 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 3 23:22:13.121968 sshd[1642]: Connection closed by 10.0.0.1 port 47946 Sep 3 23:22:13.122381 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Sep 3 23:22:13.135184 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:47946.service: Deactivated successfully. Sep 3 23:22:13.137741 systemd[1]: session-2.scope: Deactivated successfully. Sep 3 23:22:13.138773 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Sep 3 23:22:13.140747 systemd-logind[1496]: Removed session 2. Sep 3 23:22:13.142417 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:47954.service - OpenSSH per-connection server daemon (10.0.0.1:47954). Sep 3 23:22:13.210223 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 47954 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:22:13.211474 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:22:13.217181 systemd-logind[1496]: New session 3 of user core. Sep 3 23:22:13.232296 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 3 23:22:13.281131 sshd[1650]: Connection closed by 10.0.0.1 port 47954 Sep 3 23:22:13.282152 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Sep 3 23:22:13.299539 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:47954.service: Deactivated successfully. Sep 3 23:22:13.301340 systemd[1]: session-3.scope: Deactivated successfully. Sep 3 23:22:13.303609 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. 
Sep 3 23:22:13.305802 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:47966.service - OpenSSH per-connection server daemon (10.0.0.1:47966).
Sep 3 23:22:13.306374 systemd-logind[1496]: Removed session 3.
Sep 3 23:22:13.355362 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 47966 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:22:13.356754 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:22:13.361415 systemd-logind[1496]: New session 4 of user core.
Sep 3 23:22:13.375304 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 3 23:22:13.427709 sshd[1658]: Connection closed by 10.0.0.1 port 47966
Sep 3 23:22:13.428044 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 3 23:22:13.443083 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:47966.service: Deactivated successfully.
Sep 3 23:22:13.444466 systemd[1]: session-4.scope: Deactivated successfully.
Sep 3 23:22:13.445665 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
Sep 3 23:22:13.447809 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972).
Sep 3 23:22:13.448465 systemd-logind[1496]: Removed session 4.
Sep 3 23:22:13.496313 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:22:13.497531 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:22:13.501659 systemd-logind[1496]: New session 5 of user core.
Sep 3 23:22:13.511328 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 3 23:22:13.568790 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 3 23:22:13.569049 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:22:13.589814 sudo[1667]: pam_unix(sudo:session): session closed for user root
Sep 3 23:22:13.591227 sshd[1666]: Connection closed by 10.0.0.1 port 47972
Sep 3 23:22:13.591945 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Sep 3 23:22:13.610124 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:47972.service: Deactivated successfully.
Sep 3 23:22:13.611452 systemd[1]: session-5.scope: Deactivated successfully.
Sep 3 23:22:13.612985 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
Sep 3 23:22:13.615950 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:47988.service - OpenSSH per-connection server daemon (10.0.0.1:47988).
Sep 3 23:22:13.616973 systemd-logind[1496]: Removed session 5.
Sep 3 23:22:13.657262 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 47988 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:22:13.658573 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:22:13.664501 systemd-logind[1496]: New session 6 of user core.
Sep 3 23:22:13.674290 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 3 23:22:13.725906 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 3 23:22:13.726210 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:22:13.814805 sudo[1677]: pam_unix(sudo:session): session closed for user root
Sep 3 23:22:13.819857 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 3 23:22:13.820107 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:22:13.836343 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:22:13.883051 augenrules[1699]: No rules
Sep 3 23:22:13.884204 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:22:13.886164 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:22:13.887677 sudo[1676]: pam_unix(sudo:session): session closed for user root
Sep 3 23:22:13.888938 sshd[1675]: Connection closed by 10.0.0.1 port 47988
Sep 3 23:22:13.889326 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Sep 3 23:22:13.900071 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:47988.service: Deactivated successfully.
Sep 3 23:22:13.901392 systemd[1]: session-6.scope: Deactivated successfully.
Sep 3 23:22:13.902252 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
Sep 3 23:22:13.904275 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:47994.service - OpenSSH per-connection server daemon (10.0.0.1:47994).
Sep 3 23:22:13.912379 systemd-logind[1496]: Removed session 6.
Sep 3 23:22:13.951804 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 47994 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:22:13.952954 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:22:13.957172 systemd-logind[1496]: New session 7 of user core.
Sep 3 23:22:13.967265 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 3 23:22:14.018006 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 3 23:22:14.018788 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:22:14.339790 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 3 23:22:14.363506 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 3 23:22:14.596138 dockerd[1732]: time="2025-09-03T23:22:14.595971445Z" level=info msg="Starting up"
Sep 3 23:22:14.597885 dockerd[1732]: time="2025-09-03T23:22:14.597732609Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 3 23:22:14.644710 systemd[1]: var-lib-docker-metacopy\x2dcheck2976410484-merged.mount: Deactivated successfully.
Sep 3 23:22:14.653678 dockerd[1732]: time="2025-09-03T23:22:14.653630099Z" level=info msg="Loading containers: start."
Sep 3 23:22:14.662136 kernel: Initializing XFRM netlink socket
Sep 3 23:22:14.859309 systemd-networkd[1437]: docker0: Link UP
Sep 3 23:22:14.862631 dockerd[1732]: time="2025-09-03T23:22:14.862588107Z" level=info msg="Loading containers: done."
Sep 3 23:22:14.878886 dockerd[1732]: time="2025-09-03T23:22:14.878519555Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 3 23:22:14.878886 dockerd[1732]: time="2025-09-03T23:22:14.878621148Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 3 23:22:14.878886 dockerd[1732]: time="2025-09-03T23:22:14.878731462Z" level=info msg="Initializing buildkit"
Sep 3 23:22:14.899797 dockerd[1732]: time="2025-09-03T23:22:14.899756557Z" level=info msg="Completed buildkit initialization"
Sep 3 23:22:14.905773 dockerd[1732]: time="2025-09-03T23:22:14.905728615Z" level=info msg="Daemon has completed initialization"
Sep 3 23:22:14.905998 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 3 23:22:14.906671 dockerd[1732]: time="2025-09-03T23:22:14.906526733Z" level=info msg="API listen on /run/docker.sock"
Sep 3 23:22:15.643929 containerd[1524]: time="2025-09-03T23:22:15.643876808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 3 23:22:16.288900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968064859.mount: Deactivated successfully.
Sep 3 23:22:17.209303 containerd[1524]: time="2025-09-03T23:22:17.209249929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:17.210604 containerd[1524]: time="2025-09-03T23:22:17.210568490Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 3 23:22:17.212146 containerd[1524]: time="2025-09-03T23:22:17.211277305Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:17.214454 containerd[1524]: time="2025-09-03T23:22:17.213912059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:17.215042 containerd[1524]: time="2025-09-03T23:22:17.215003430Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.571080643s"
Sep 3 23:22:17.215105 containerd[1524]: time="2025-09-03T23:22:17.215048290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 3 23:22:17.216422 containerd[1524]: time="2025-09-03T23:22:17.216392853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 3 23:22:18.307197 containerd[1524]: time="2025-09-03T23:22:18.307148686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:18.307808 containerd[1524]: time="2025-09-03T23:22:18.307769905Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 3 23:22:18.308579 containerd[1524]: time="2025-09-03T23:22:18.308549639Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:18.310958 containerd[1524]: time="2025-09-03T23:22:18.310926782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:18.312571 containerd[1524]: time="2025-09-03T23:22:18.312540557Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.096110511s"
Sep 3 23:22:18.312674 containerd[1524]: time="2025-09-03T23:22:18.312659482Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 3 23:22:18.313270 containerd[1524]: time="2025-09-03T23:22:18.313250539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 3 23:22:19.249277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:22:19.252314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:22:19.427749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:22:19.442466 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:22:19.575200 containerd[1524]: time="2025-09-03T23:22:19.574845730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:19.576207 containerd[1524]: time="2025-09-03T23:22:19.576169660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 3 23:22:19.578133 containerd[1524]: time="2025-09-03T23:22:19.577240545Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:19.579691 containerd[1524]: time="2025-09-03T23:22:19.579661983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:19.583009 containerd[1524]: time="2025-09-03T23:22:19.582965012Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.269605739s"
Sep 3 23:22:19.583009 containerd[1524]: time="2025-09-03T23:22:19.583008957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 3 23:22:19.583611 containerd[1524]: time="2025-09-03T23:22:19.583574672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 3 23:22:19.586913 kubelet[2014]: E0903 23:22:19.586878 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:22:19.590039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:22:19.590203 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:22:19.591262 systemd[1]: kubelet.service: Consumed 155ms CPU time, 107.2M memory peak.
Sep 3 23:22:20.570969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106582406.mount: Deactivated successfully.
Sep 3 23:22:20.940582 containerd[1524]: time="2025-09-03T23:22:20.940154315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:20.940936 containerd[1524]: time="2025-09-03T23:22:20.940838188Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 3 23:22:20.941837 containerd[1524]: time="2025-09-03T23:22:20.941780083Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:20.943927 containerd[1524]: time="2025-09-03T23:22:20.943896999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:20.944497 containerd[1524]: time="2025-09-03T23:22:20.944468317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.360860247s"
Sep 3 23:22:20.944557 containerd[1524]: time="2025-09-03T23:22:20.944500183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 3 23:22:20.945200 containerd[1524]: time="2025-09-03T23:22:20.945167943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 3 23:22:21.471011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735242774.mount: Deactivated successfully.
Sep 3 23:22:22.153854 containerd[1524]: time="2025-09-03T23:22:22.153804717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:22.154609 containerd[1524]: time="2025-09-03T23:22:22.154574193Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 3 23:22:22.156135 containerd[1524]: time="2025-09-03T23:22:22.155769031Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:22.159089 containerd[1524]: time="2025-09-03T23:22:22.159049498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:22.160807 containerd[1524]: time="2025-09-03T23:22:22.160774066Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.215573699s"
Sep 3 23:22:22.160869 containerd[1524]: time="2025-09-03T23:22:22.160812127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 3 23:22:22.161292 containerd[1524]: time="2025-09-03T23:22:22.161242378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:22:22.624636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228170418.mount: Deactivated successfully.
Sep 3 23:22:22.630803 containerd[1524]: time="2025-09-03T23:22:22.630149220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:22:22.631552 containerd[1524]: time="2025-09-03T23:22:22.631526511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 3 23:22:22.632457 containerd[1524]: time="2025-09-03T23:22:22.632428800Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:22:22.635542 containerd[1524]: time="2025-09-03T23:22:22.635507863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:22:22.636478 containerd[1524]: time="2025-09-03T23:22:22.636079621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.802387ms"
Sep 3 23:22:22.636818 containerd[1524]: time="2025-09-03T23:22:22.636797053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 3 23:22:22.637421 containerd[1524]: time="2025-09-03T23:22:22.637383354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 3 23:22:23.249103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883423295.mount: Deactivated successfully.
Sep 3 23:22:24.794572 containerd[1524]: time="2025-09-03T23:22:24.794508905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:24.795308 containerd[1524]: time="2025-09-03T23:22:24.795036033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 3 23:22:24.796001 containerd[1524]: time="2025-09-03T23:22:24.795966257Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:24.798770 containerd[1524]: time="2025-09-03T23:22:24.798732578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:24.800058 containerd[1524]: time="2025-09-03T23:22:24.799899413Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.162480245s"
Sep 3 23:22:24.800058 containerd[1524]: time="2025-09-03T23:22:24.799931292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 3 23:22:29.748825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:22:29.750359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:22:29.915225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:22:29.918774 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:22:29.957886 kubelet[2172]: E0903 23:22:29.957836 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:22:29.960459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:22:29.960578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:22:29.960989 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.6M memory peak.
Sep 3 23:22:29.962916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:22:29.963057 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.6M memory peak.
Sep 3 23:22:29.965669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:22:29.987467 systemd[1]: Reload requested from client PID 2187 ('systemctl') (unit session-7.scope)...
Sep 3 23:22:29.987483 systemd[1]: Reloading...
Sep 3 23:22:30.044392 zram_generator::config[2229]: No configuration found.
Sep 3 23:22:30.188709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:22:30.273308 systemd[1]: Reloading finished in 285 ms.
Sep 3 23:22:30.329698 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:22:30.329775 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:22:30.330040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:22:30.330085 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak.
Sep 3 23:22:30.331673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:22:30.445436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:22:30.449944 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:22:30.485147 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:22:30.485147 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:22:30.485147 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:22:30.485486 kubelet[2274]: I0903 23:22:30.485197 2274 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:22:31.663600 kubelet[2274]: I0903 23:22:31.663551 2274 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 3 23:22:31.663600 kubelet[2274]: I0903 23:22:31.663588 2274 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:22:31.663948 kubelet[2274]: I0903 23:22:31.663821 2274 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 3 23:22:31.684659 kubelet[2274]: E0903 23:22:31.684617 2274 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:22:31.685687 kubelet[2274]: I0903 23:22:31.685670 2274 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:22:31.693959 kubelet[2274]: I0903 23:22:31.693753 2274 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:22:31.698194 kubelet[2274]: I0903 23:22:31.698166 2274 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:22:31.699950 kubelet[2274]: I0903 23:22:31.699103 2274 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 3 23:22:31.699950 kubelet[2274]: I0903 23:22:31.699269 2274 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:22:31.699950 kubelet[2274]: I0903 23:22:31.699301 2274 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:22:31.699950 kubelet[2274]: I0903 23:22:31.699531 2274 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:22:31.700211 kubelet[2274]: I0903 23:22:31.699540 2274 container_manager_linux.go:300] "Creating device plugin manager"
Sep 3 23:22:31.700211 kubelet[2274]: I0903 23:22:31.699771 2274 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:22:31.701975 kubelet[2274]: I0903 23:22:31.701953 2274 kubelet.go:408] "Attempting to sync node with API server"
Sep 3 23:22:31.702073 kubelet[2274]: I0903 23:22:31.702063 2274 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:22:31.702149 kubelet[2274]: I0903 23:22:31.702140 2274 kubelet.go:314] "Adding apiserver pod source"
Sep 3 23:22:31.703042 kubelet[2274]: I0903 23:22:31.703003 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:22:31.705049 kubelet[2274]: W0903 23:22:31.704998 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Sep 3 23:22:31.705139 kubelet[2274]: E0903 23:22:31.705067 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:22:31.705433 kubelet[2274]: W0903 23:22:31.705397 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Sep 3 23:22:31.705470 kubelet[2274]: E0903 23:22:31.705441 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:22:31.708635 kubelet[2274]: I0903 23:22:31.708551 2274 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:22:31.709291 kubelet[2274]: I0903 23:22:31.709271 2274 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:22:31.709399 kubelet[2274]: W0903 23:22:31.709386 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:22:31.712128 kubelet[2274]: I0903 23:22:31.710389 2274 server.go:1274] "Started kubelet"
Sep 3 23:22:31.712128 kubelet[2274]: I0903 23:22:31.711104 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:22:31.712128 kubelet[2274]: I0903 23:22:31.711446 2274 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:22:31.712128 kubelet[2274]: I0903 23:22:31.711563 2274 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:22:31.712128 kubelet[2274]: I0903 23:22:31.712092 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:22:31.712923 kubelet[2274]: I0903 23:22:31.712902 2274 server.go:449] "Adding debug handlers to kubelet server"
Sep 3 23:22:31.713206 kubelet[2274]: I0903 23:22:31.713176 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:22:31.714272 kubelet[2274]: I0903 23:22:31.714236 2274 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 3 23:22:31.714391 kubelet[2274]: I0903 23:22:31.714374 2274 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 3 23:22:31.714452 kubelet[2274]: I0903 23:22:31.714438 2274 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:22:31.715080 kubelet[2274]: W0903 23:22:31.714951 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Sep 3 23:22:31.715080 kubelet[2274]: E0903 23:22:31.715009 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:22:31.715161 kubelet[2274]: E0903 23:22:31.715101 2274 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:22:31.715251 kubelet[2274]: E0903 23:22:31.713936 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861e936e9bf9ecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:22:31.710359243 +0000 UTC m=+1.256609878,LastTimestamp:2025-09-03 23:22:31.710359243 +0000 UTC m=+1.256609878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 3 23:22:31.715362 kubelet[2274]: I0903 23:22:31.715295 2274 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:22:31.715520 kubelet[2274]: I0903 23:22:31.715496 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:22:31.715605 kubelet[2274]: E0903 23:22:31.715542 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:22:31.716019 kubelet[2274]: E0903 23:22:31.715967 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms"
Sep 3 23:22:31.717294 kubelet[2274]: I0903 23:22:31.717262 2274 factory.go:221] Registration of the containerd container factory successfully
Sep
3 23:22:31.728362 kubelet[2274]: I0903 23:22:31.728318 2274 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 3 23:22:31.728362 kubelet[2274]: I0903 23:22:31.728343 2274 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 3 23:22:31.728362 kubelet[2274]: I0903 23:22:31.728362 2274 state_mem.go:36] "Initialized new in-memory state store" Sep 3 23:22:31.732629 kubelet[2274]: I0903 23:22:31.732573 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 3 23:22:31.733691 kubelet[2274]: I0903 23:22:31.733657 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 3 23:22:31.733691 kubelet[2274]: I0903 23:22:31.733683 2274 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 3 23:22:31.733801 kubelet[2274]: I0903 23:22:31.733710 2274 kubelet.go:2321] "Starting kubelet main sync loop" Sep 3 23:22:31.733801 kubelet[2274]: E0903 23:22:31.733757 2274 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 3 23:22:31.734596 kubelet[2274]: W0903 23:22:31.734549 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Sep 3 23:22:31.734671 kubelet[2274]: E0903 23:22:31.734602 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:22:31.816485 kubelet[2274]: E0903 23:22:31.816429 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:31.834633 
kubelet[2274]: E0903 23:22:31.834586 2274 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 3 23:22:31.846668 kubelet[2274]: I0903 23:22:31.846628 2274 policy_none.go:49] "None policy: Start" Sep 3 23:22:31.847435 kubelet[2274]: I0903 23:22:31.847414 2274 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 3 23:22:31.847502 kubelet[2274]: I0903 23:22:31.847445 2274 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:22:31.858396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:22:31.875422 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:22:31.879206 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 3 23:22:31.899073 kubelet[2274]: I0903 23:22:31.899029 2274 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:22:31.899381 kubelet[2274]: I0903 23:22:31.899263 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:22:31.899381 kubelet[2274]: I0903 23:22:31.899283 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:22:31.899595 kubelet[2274]: I0903 23:22:31.899572 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:22:31.901681 kubelet[2274]: E0903 23:22:31.901655 2274 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 3 23:22:31.917646 kubelet[2274]: E0903 23:22:31.916969 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Sep 3 
23:22:32.001114 kubelet[2274]: I0903 23:22:32.001081 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:22:32.001742 kubelet[2274]: E0903 23:22:32.001707 2274 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Sep 3 23:22:32.042884 systemd[1]: Created slice kubepods-burstable-pod1c450ca92bf9da53fe3d570f0f411391.slice - libcontainer container kubepods-burstable-pod1c450ca92bf9da53fe3d570f0f411391.slice. Sep 3 23:22:32.066512 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 3 23:22:32.093124 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 3 23:22:32.116127 kubelet[2274]: I0903 23:22:32.116074 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:22:32.116416 kubelet[2274]: I0903 23:22:32.116258 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:22:32.116416 kubelet[2274]: I0903 23:22:32.116289 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:22:32.116416 kubelet[2274]: I0903 23:22:32.116307 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:22:32.116416 kubelet[2274]: I0903 23:22:32.116322 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:22:32.116416 kubelet[2274]: I0903 23:22:32.116338 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:22:32.116570 kubelet[2274]: I0903 23:22:32.116354 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:22:32.116570 kubelet[2274]: I0903 23:22:32.116368 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:22:32.116570 kubelet[2274]: I0903 23:22:32.116381 2274 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 3 23:22:32.203770 kubelet[2274]: I0903 23:22:32.203675 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:22:32.204216 kubelet[2274]: E0903 23:22:32.204157 2274 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Sep 3 23:22:32.317551 kubelet[2274]: E0903 23:22:32.317491 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Sep 3 23:22:32.364127 kubelet[2274]: E0903 23:22:32.364025 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.364696 containerd[1524]: time="2025-09-03T23:22:32.364660350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c450ca92bf9da53fe3d570f0f411391,Namespace:kube-system,Attempt:0,}" Sep 3 23:22:32.381626 containerd[1524]: time="2025-09-03T23:22:32.381528723Z" level=info msg="connecting to shim d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db" 
address="unix:///run/containerd/s/de8acad6618a90c3e4ba4df310fe73af2c9985a8b03d212421de7647eb7d7d43" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:22:32.390864 kubelet[2274]: E0903 23:22:32.390555 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.391076 containerd[1524]: time="2025-09-03T23:22:32.391036984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 3 23:22:32.396040 kubelet[2274]: E0903 23:22:32.395772 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.397005 containerd[1524]: time="2025-09-03T23:22:32.396931036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 3 23:22:32.414315 systemd[1]: Started cri-containerd-d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db.scope - libcontainer container d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db. 
Sep 3 23:22:32.418402 containerd[1524]: time="2025-09-03T23:22:32.418356856Z" level=info msg="connecting to shim b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96" address="unix:///run/containerd/s/1d289f0ade365962af7a47510c45ace8ec319f3d6e3af7564172171986fbba3c" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:22:32.431801 containerd[1524]: time="2025-09-03T23:22:32.431744357Z" level=info msg="connecting to shim 8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582" address="unix:///run/containerd/s/57c89926b28ba71bac67fdb961657fb2fc900c8832d246a40a2c5b95ceecd525" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:22:32.445315 systemd[1]: Started cri-containerd-b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96.scope - libcontainer container b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96. Sep 3 23:22:32.452688 systemd[1]: Started cri-containerd-8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582.scope - libcontainer container 8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582. 
Sep 3 23:22:32.461980 containerd[1524]: time="2025-09-03T23:22:32.461872857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c450ca92bf9da53fe3d570f0f411391,Namespace:kube-system,Attempt:0,} returns sandbox id \"d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db\"" Sep 3 23:22:32.463146 kubelet[2274]: E0903 23:22:32.463118 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.465085 containerd[1524]: time="2025-09-03T23:22:32.465050761Z" level=info msg="CreateContainer within sandbox \"d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:22:32.473516 containerd[1524]: time="2025-09-03T23:22:32.473483687Z" level=info msg="Container f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:22:32.480094 containerd[1524]: time="2025-09-03T23:22:32.480040179Z" level=info msg="CreateContainer within sandbox \"d876c50a054f0d1f08845cef1d1e583b8debd2104741c585edf465b7f7f784db\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf\"" Sep 3 23:22:32.480678 containerd[1524]: time="2025-09-03T23:22:32.480646636Z" level=info msg="StartContainer for \"f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf\"" Sep 3 23:22:32.484741 containerd[1524]: time="2025-09-03T23:22:32.484707112Z" level=info msg="connecting to shim f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf" address="unix:///run/containerd/s/de8acad6618a90c3e4ba4df310fe73af2c9985a8b03d212421de7647eb7d7d43" protocol=ttrpc version=3 Sep 3 23:22:32.496535 containerd[1524]: time="2025-09-03T23:22:32.496485253Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96\"" Sep 3 23:22:32.497203 kubelet[2274]: E0903 23:22:32.497180 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.499214 containerd[1524]: time="2025-09-03T23:22:32.499183154Z" level=info msg="CreateContainer within sandbox \"b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:22:32.507291 systemd[1]: Started cri-containerd-f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf.scope - libcontainer container f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf. Sep 3 23:22:32.509749 containerd[1524]: time="2025-09-03T23:22:32.509714927Z" level=info msg="Container 55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:22:32.510321 containerd[1524]: time="2025-09-03T23:22:32.510293012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582\"" Sep 3 23:22:32.511968 kubelet[2274]: E0903 23:22:32.511787 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.513322 containerd[1524]: time="2025-09-03T23:22:32.513281155Z" level=info msg="CreateContainer within sandbox \"8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:22:32.520431 
containerd[1524]: time="2025-09-03T23:22:32.520369753Z" level=info msg="CreateContainer within sandbox \"b3d4857609f416ced3fa9cb8eccb21ce952446951b29ef3f0fbc5e0722c5da96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df\"" Sep 3 23:22:32.521370 containerd[1524]: time="2025-09-03T23:22:32.521343204Z" level=info msg="StartContainer for \"55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df\"" Sep 3 23:22:32.522404 containerd[1524]: time="2025-09-03T23:22:32.522380763Z" level=info msg="connecting to shim 55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df" address="unix:///run/containerd/s/1d289f0ade365962af7a47510c45ace8ec319f3d6e3af7564172171986fbba3c" protocol=ttrpc version=3 Sep 3 23:22:32.526722 containerd[1524]: time="2025-09-03T23:22:32.526685023Z" level=info msg="Container 9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:22:32.534137 containerd[1524]: time="2025-09-03T23:22:32.534069465Z" level=info msg="CreateContainer within sandbox \"8b6cdc12997569a9f40d6daeee7c0ad0e92ec24a9887081c3d4a804196a33582\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91\"" Sep 3 23:22:32.534529 containerd[1524]: time="2025-09-03T23:22:32.534510732Z" level=info msg="StartContainer for \"9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91\"" Sep 3 23:22:32.535544 containerd[1524]: time="2025-09-03T23:22:32.535498950Z" level=info msg="connecting to shim 9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91" address="unix:///run/containerd/s/57c89926b28ba71bac67fdb961657fb2fc900c8832d246a40a2c5b95ceecd525" protocol=ttrpc version=3 Sep 3 23:22:32.545306 systemd[1]: Started cri-containerd-55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df.scope - libcontainer 
container 55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df. Sep 3 23:22:32.550451 containerd[1524]: time="2025-09-03T23:22:32.549947460Z" level=info msg="StartContainer for \"f1ca0287be61641caa49d5de4527ff0e7b23c5e35a87f781280f16cd74dc2fcf\" returns successfully" Sep 3 23:22:32.564851 kubelet[2274]: W0903 23:22:32.564777 2274 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Sep 3 23:22:32.564851 kubelet[2274]: E0903 23:22:32.564850 2274 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Sep 3 23:22:32.570301 systemd[1]: Started cri-containerd-9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91.scope - libcontainer container 9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91. 
Sep 3 23:22:32.601246 containerd[1524]: time="2025-09-03T23:22:32.601195730Z" level=info msg="StartContainer for \"55e91d2ab5b651856de4d4036870c9eb3bdf2ba23555f0cff6ab92ba1dbb12df\" returns successfully" Sep 3 23:22:32.606389 kubelet[2274]: I0903 23:22:32.606356 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:22:32.606712 kubelet[2274]: E0903 23:22:32.606680 2274 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Sep 3 23:22:32.615327 containerd[1524]: time="2025-09-03T23:22:32.615076800Z" level=info msg="StartContainer for \"9898d67fffbfc5492220a30d85375c9a09879cb9cf30cc92085057d8d7856e91\" returns successfully" Sep 3 23:22:32.743156 kubelet[2274]: E0903 23:22:32.742515 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.748508 kubelet[2274]: E0903 23:22:32.748482 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:32.751591 kubelet[2274]: E0903 23:22:32.751542 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:33.409616 kubelet[2274]: I0903 23:22:33.409559 2274 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 3 23:22:33.754397 kubelet[2274]: E0903 23:22:33.753992 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:33.754397 kubelet[2274]: E0903 23:22:33.754341 2274 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:34.227901 kubelet[2274]: E0903 23:22:34.226865 2274 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 3 23:22:34.319657 kubelet[2274]: I0903 23:22:34.319487 2274 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 3 23:22:34.319657 kubelet[2274]: E0903 23:22:34.319524 2274 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 3 23:22:34.331880 kubelet[2274]: E0903 23:22:34.331852 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:34.433003 kubelet[2274]: E0903 23:22:34.432962 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:34.533491 kubelet[2274]: E0903 23:22:34.533452 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:34.634059 kubelet[2274]: E0903 23:22:34.634023 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:34.734207 kubelet[2274]: E0903 23:22:34.734167 2274 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 3 23:22:34.755412 kubelet[2274]: E0903 23:22:34.755384 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:35.706357 kubelet[2274]: I0903 23:22:35.706327 2274 apiserver.go:52] "Watching apiserver" Sep 3 23:22:35.714938 kubelet[2274]: I0903 23:22:35.714908 2274 desired_state_of_world_populator.go:155] "Finished populating 
initial desired state of world" Sep 3 23:22:36.385351 systemd[1]: Reload requested from client PID 2548 ('systemctl') (unit session-7.scope)... Sep 3 23:22:36.385367 systemd[1]: Reloading... Sep 3 23:22:36.455141 zram_generator::config[2594]: No configuration found. Sep 3 23:22:36.595925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:22:36.695788 systemd[1]: Reloading finished in 310 ms. Sep 3 23:22:36.719008 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:22:36.739969 systemd[1]: kubelet.service: Deactivated successfully. Sep 3 23:22:36.740267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:22:36.740325 systemd[1]: kubelet.service: Consumed 1.614s CPU time, 128.5M memory peak. Sep 3 23:22:36.741962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:22:36.909531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:22:36.913344 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:22:36.952572 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:22:36.952572 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 3 23:22:36.952572 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:22:36.952572 kubelet[2633]: I0903 23:22:36.952377 2633 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:22:36.959726 kubelet[2633]: I0903 23:22:36.959683 2633 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 3 23:22:36.959726 kubelet[2633]: I0903 23:22:36.959716 2633 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:22:36.959942 kubelet[2633]: I0903 23:22:36.959925 2633 server.go:934] "Client rotation is on, will bootstrap in background" Sep 3 23:22:36.961322 kubelet[2633]: I0903 23:22:36.961296 2633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 3 23:22:36.963250 kubelet[2633]: I0903 23:22:36.963227 2633 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:22:36.968594 kubelet[2633]: I0903 23:22:36.968570 2633 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:22:36.970996 kubelet[2633]: I0903 23:22:36.970966 2633 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 3 23:22:36.971217 kubelet[2633]: I0903 23:22:36.971199 2633 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 3 23:22:36.971359 kubelet[2633]: I0903 23:22:36.971323 2633 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:22:36.971533 kubelet[2633]: I0903 23:22:36.971361 2633 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:22:36.971533 kubelet[2633]: I0903 23:22:36.971529 2633 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:22:36.971624 kubelet[2633]: I0903 23:22:36.971538 2633 container_manager_linux.go:300] "Creating device plugin manager"
Sep 3 23:22:36.971624 kubelet[2633]: I0903 23:22:36.971571 2633 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:22:36.971679 kubelet[2633]: I0903 23:22:36.971667 2633 kubelet.go:408] "Attempting to sync node with API server"
Sep 3 23:22:36.971702 kubelet[2633]: I0903 23:22:36.971682 2633 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:22:36.971702 kubelet[2633]: I0903 23:22:36.971699 2633 kubelet.go:314] "Adding apiserver pod source"
Sep 3 23:22:36.971746 kubelet[2633]: I0903 23:22:36.971708 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:22:36.972580 kubelet[2633]: I0903 23:22:36.972334 2633 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:22:36.974502 kubelet[2633]: I0903 23:22:36.974481 2633 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:22:36.978281 kubelet[2633]: I0903 23:22:36.978249 2633 server.go:1274] "Started kubelet"
Sep 3 23:22:36.981158 kubelet[2633]: I0903 23:22:36.980070 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:22:36.981158 kubelet[2633]: I0903 23:22:36.980353 2633 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:22:36.981158 kubelet[2633]: I0903 23:22:36.980409 2633 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:22:36.981575 kubelet[2633]: I0903 23:22:36.981555 2633 server.go:449] "Adding debug handlers to kubelet server"
Sep 3 23:22:36.982766 kubelet[2633]: I0903 23:22:36.982732 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:22:36.988614 kubelet[2633]: E0903 23:22:36.988582 2633 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:22:36.989500 kubelet[2633]: I0903 23:22:36.989207 2633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:22:36.991402 kubelet[2633]: E0903 23:22:36.991363 2633 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:22:36.991508 kubelet[2633]: I0903 23:22:36.991490 2633 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 3 23:22:36.994205 kubelet[2633]: I0903 23:22:36.994188 2633 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:22:36.994483 kubelet[2633]: I0903 23:22:36.994446 2633 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 3 23:22:36.999916 kubelet[2633]: I0903 23:22:36.999883 2633 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:22:36.999916 kubelet[2633]: I0903 23:22:36.999906 2633 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:22:37.000029 kubelet[2633]: I0903 23:22:37.000013 2633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:22:37.004412 kubelet[2633]: I0903 23:22:37.004357 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:22:37.005341 kubelet[2633]: I0903 23:22:37.005312 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:22:37.005341 kubelet[2633]: I0903 23:22:37.005335 2633 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 3 23:22:37.005439 kubelet[2633]: I0903 23:22:37.005358 2633 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 3 23:22:37.005439 kubelet[2633]: E0903 23:22:37.005401 2633 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:22:37.029602 kubelet[2633]: I0903 23:22:37.029575 2633 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 3 23:22:37.029602 kubelet[2633]: I0903 23:22:37.029593 2633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 3 23:22:37.029742 kubelet[2633]: I0903 23:22:37.029641 2633 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:22:37.029823 kubelet[2633]: I0903 23:22:37.029803 2633 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 3 23:22:37.029870 kubelet[2633]: I0903 23:22:37.029822 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 3 23:22:37.029870 kubelet[2633]: I0903 23:22:37.029841 2633 policy_none.go:49] "None policy: Start"
Sep 3 23:22:37.030377 kubelet[2633]: I0903 23:22:37.030361 2633 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 3 23:22:37.030420 kubelet[2633]: I0903 23:22:37.030385 2633 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:22:37.030543 kubelet[2633]: I0903 23:22:37.030526 2633 state_mem.go:75] "Updated machine memory state"
Sep 3 23:22:37.039534 kubelet[2633]: I0903 23:22:37.039489 2633 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:22:37.039675 kubelet[2633]: I0903 23:22:37.039659 2633 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:22:37.039706 kubelet[2633]: I0903 23:22:37.039674 2633 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:22:37.040353 kubelet[2633]: I0903 23:22:37.040332 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:22:37.145442 kubelet[2633]: I0903 23:22:37.145217 2633 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 3 23:22:37.150889 kubelet[2633]: I0903 23:22:37.150836 2633 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 3 23:22:37.150997 kubelet[2633]: I0903 23:22:37.150913 2633 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 3 23:22:37.295904 kubelet[2633]: I0903 23:22:37.295574 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:22:37.295904 kubelet[2633]: I0903 23:22:37.295907 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:22:37.295904 kubelet[2633]: I0903 23:22:37.295930 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:22:37.295904 kubelet[2633]: I0903 23:22:37.295951 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:22:37.295904 kubelet[2633]: I0903 23:22:37.295966 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:22:37.296783 kubelet[2633]: I0903 23:22:37.295980 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:22:37.296783 kubelet[2633]: I0903 23:22:37.295995 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:22:37.296783 kubelet[2633]: I0903 23:22:37.296010 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 3 23:22:37.296783 kubelet[2633]: I0903 23:22:37.296026 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c450ca92bf9da53fe3d570f0f411391-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c450ca92bf9da53fe3d570f0f411391\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:22:37.413741 kubelet[2633]: E0903 23:22:37.413650 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:37.413984 kubelet[2633]: E0903 23:22:37.413938 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:37.415223 kubelet[2633]: E0903 23:22:37.415185 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:37.972318 kubelet[2633]: I0903 23:22:37.972267 2633 apiserver.go:52] "Watching apiserver"
Sep 3 23:22:37.994617 kubelet[2633]: I0903 23:22:37.994580 2633 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 3 23:22:38.020728 kubelet[2633]: E0903 23:22:38.020698 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:38.020827 kubelet[2633]: E0903 23:22:38.020706 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:38.022499 kubelet[2633]: E0903 23:22:38.022461 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:38.050287 kubelet[2633]: I0903 23:22:38.050208 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.050191473 podStartE2EDuration="1.050191473s" podCreationTimestamp="2025-09-03 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:22:38.049960789 +0000 UTC m=+1.133718492" watchObservedRunningTime="2025-09-03 23:22:38.050191473 +0000 UTC m=+1.133949176"
Sep 3 23:22:38.050449 kubelet[2633]: I0903 23:22:38.050327 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.050321898 podStartE2EDuration="1.050321898s" podCreationTimestamp="2025-09-03 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:22:38.042130023 +0000 UTC m=+1.125887766" watchObservedRunningTime="2025-09-03 23:22:38.050321898 +0000 UTC m=+1.134079601"
Sep 3 23:22:38.065741 kubelet[2633]: I0903 23:22:38.065660 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.065642086 podStartE2EDuration="1.065642086s" podCreationTimestamp="2025-09-03 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:22:38.057782474 +0000 UTC m=+1.141540177" watchObservedRunningTime="2025-09-03 23:22:38.065642086 +0000 UTC m=+1.149399789"
Sep 3 23:22:39.022257 kubelet[2633]: E0903 23:22:39.021923 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:39.771179 kubelet[2633]: E0903 23:22:39.771146 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:40.765293 kubelet[2633]: E0903 23:22:40.765254 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:41.978252 kubelet[2633]: I0903 23:22:41.978202 2633 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 3 23:22:41.978637 containerd[1524]: time="2025-09-03T23:22:41.978514096Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 3 23:22:41.978800 kubelet[2633]: I0903 23:22:41.978736 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 3 23:22:42.922648 systemd[1]: Created slice kubepods-besteffort-pod31d38341_dfc4_4b0d_8412_3c33909fd4e2.slice - libcontainer container kubepods-besteffort-pod31d38341_dfc4_4b0d_8412_3c33909fd4e2.slice.
Sep 3 23:22:42.936970 kubelet[2633]: I0903 23:22:42.936921 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31d38341-dfc4-4b0d-8412-3c33909fd4e2-xtables-lock\") pod \"kube-proxy-s8ddj\" (UID: \"31d38341-dfc4-4b0d-8412-3c33909fd4e2\") " pod="kube-system/kube-proxy-s8ddj"
Sep 3 23:22:42.936970 kubelet[2633]: I0903 23:22:42.936973 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx5w\" (UniqueName: \"kubernetes.io/projected/31d38341-dfc4-4b0d-8412-3c33909fd4e2-kube-api-access-7mx5w\") pod \"kube-proxy-s8ddj\" (UID: \"31d38341-dfc4-4b0d-8412-3c33909fd4e2\") " pod="kube-system/kube-proxy-s8ddj"
Sep 3 23:22:42.937132 kubelet[2633]: I0903 23:22:42.936995 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31d38341-dfc4-4b0d-8412-3c33909fd4e2-kube-proxy\") pod \"kube-proxy-s8ddj\" (UID: \"31d38341-dfc4-4b0d-8412-3c33909fd4e2\") " pod="kube-system/kube-proxy-s8ddj"
Sep 3 23:22:42.937132 kubelet[2633]: I0903 23:22:42.937011 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31d38341-dfc4-4b0d-8412-3c33909fd4e2-lib-modules\") pod \"kube-proxy-s8ddj\" (UID: \"31d38341-dfc4-4b0d-8412-3c33909fd4e2\") " pod="kube-system/kube-proxy-s8ddj"
Sep 3 23:22:43.045929 systemd[1]: Created slice kubepods-besteffort-podfbd7e741_5450_4a67_88b0_79974698acaf.slice - libcontainer container kubepods-besteffort-podfbd7e741_5450_4a67_88b0_79974698acaf.slice.
Sep 3 23:22:43.137995 kubelet[2633]: I0903 23:22:43.137925 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fbd7e741-5450-4a67-88b0-79974698acaf-var-lib-calico\") pod \"tigera-operator-58fc44c59b-2jcql\" (UID: \"fbd7e741-5450-4a67-88b0-79974698acaf\") " pod="tigera-operator/tigera-operator-58fc44c59b-2jcql"
Sep 3 23:22:43.137995 kubelet[2633]: I0903 23:22:43.137987 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9jq\" (UniqueName: \"kubernetes.io/projected/fbd7e741-5450-4a67-88b0-79974698acaf-kube-api-access-dv9jq\") pod \"tigera-operator-58fc44c59b-2jcql\" (UID: \"fbd7e741-5450-4a67-88b0-79974698acaf\") " pod="tigera-operator/tigera-operator-58fc44c59b-2jcql"
Sep 3 23:22:43.231815 kubelet[2633]: E0903 23:22:43.231368 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:43.232648 containerd[1524]: time="2025-09-03T23:22:43.232215149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8ddj,Uid:31d38341-dfc4-4b0d-8412-3c33909fd4e2,Namespace:kube-system,Attempt:0,}"
Sep 3 23:22:43.257176 containerd[1524]: time="2025-09-03T23:22:43.255816889Z" level=info msg="connecting to shim 9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0" address="unix:///run/containerd/s/e66277dffb7f00f0e344e8209a261fbd4d01fcf7c69b6ccf4f4117b7b8ac722a" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:22:43.276302 systemd[1]: Started cri-containerd-9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0.scope - libcontainer container 9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0.
Sep 3 23:22:43.309044 containerd[1524]: time="2025-09-03T23:22:43.308984161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8ddj,Uid:31d38341-dfc4-4b0d-8412-3c33909fd4e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0\""
Sep 3 23:22:43.309719 kubelet[2633]: E0903 23:22:43.309693 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:43.311976 containerd[1524]: time="2025-09-03T23:22:43.311934594Z" level=info msg="CreateContainer within sandbox \"9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 3 23:22:43.352727 containerd[1524]: time="2025-09-03T23:22:43.352576459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2jcql,Uid:fbd7e741-5450-4a67-88b0-79974698acaf,Namespace:tigera-operator,Attempt:0,}"
Sep 3 23:22:43.368910 containerd[1524]: time="2025-09-03T23:22:43.368875146Z" level=info msg="Container eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:22:43.380691 containerd[1524]: time="2025-09-03T23:22:43.380639112Z" level=info msg="CreateContainer within sandbox \"9053648bed919f3e444a59fbe0f6f5cf746a4cacb753ea7fda698ffc5f9719d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f\""
Sep 3 23:22:43.381240 containerd[1524]: time="2025-09-03T23:22:43.381213493Z" level=info msg="StartContainer for \"eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f\""
Sep 3 23:22:43.382684 containerd[1524]: time="2025-09-03T23:22:43.382595479Z" level=info msg="connecting to shim eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f" address="unix:///run/containerd/s/e66277dffb7f00f0e344e8209a261fbd4d01fcf7c69b6ccf4f4117b7b8ac722a" protocol=ttrpc version=3
Sep 3 23:22:43.389853 containerd[1524]: time="2025-09-03T23:22:43.389643306Z" level=info msg="connecting to shim cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2" address="unix:///run/containerd/s/f4bf092a113b8607b487ecb1854bb5f5fe7e888a79085e25d4e754860254f95b" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:22:43.410283 systemd[1]: Started cri-containerd-eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f.scope - libcontainer container eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f.
Sep 3 23:22:43.414321 systemd[1]: Started cri-containerd-cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2.scope - libcontainer container cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2.
Sep 3 23:22:43.452043 containerd[1524]: time="2025-09-03T23:22:43.451987390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-2jcql,Uid:fbd7e741-5450-4a67-88b0-79974698acaf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2\""
Sep 3 23:22:43.455853 containerd[1524]: time="2025-09-03T23:22:43.455820596Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 3 23:22:43.456843 containerd[1524]: time="2025-09-03T23:22:43.456025258Z" level=info msg="StartContainer for \"eb05ef8fc7e611a30867ad0f28e5d99143d6649cccd7221d0cdebd1e2ca0b28f\" returns successfully"
Sep 3 23:22:44.034688 kubelet[2633]: E0903 23:22:44.034555 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:44.058933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665867023.mount: Deactivated successfully.
Sep 3 23:22:44.406569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381502194.mount: Deactivated successfully.
Sep 3 23:22:44.942496 containerd[1524]: time="2025-09-03T23:22:44.942451410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 3 23:22:44.945758 containerd[1524]: time="2025-09-03T23:22:44.945725339Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.489867459s"
Sep 3 23:22:44.945758 containerd[1524]: time="2025-09-03T23:22:44.945758702Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 3 23:22:44.953839 containerd[1524]: time="2025-09-03T23:22:44.953799588Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:44.954505 containerd[1524]: time="2025-09-03T23:22:44.954475816Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:44.955040 containerd[1524]: time="2025-09-03T23:22:44.955009989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:22:44.958080 containerd[1524]: time="2025-09-03T23:22:44.958047534Z" level=info msg="CreateContainer within sandbox \"cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 3 23:22:44.964145 containerd[1524]: time="2025-09-03T23:22:44.963698900Z" level=info msg="Container 41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:22:44.968956 containerd[1524]: time="2025-09-03T23:22:44.968915063Z" level=info msg="CreateContainer within sandbox \"cce0df0345e2e5a43908196cd1da377d58b694a8ce29c6df1a3661ecca784db2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5\""
Sep 3 23:22:44.969427 containerd[1524]: time="2025-09-03T23:22:44.969388991Z" level=info msg="StartContainer for \"41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5\""
Sep 3 23:22:44.970067 containerd[1524]: time="2025-09-03T23:22:44.970032015Z" level=info msg="connecting to shim 41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5" address="unix:///run/containerd/s/f4bf092a113b8607b487ecb1854bb5f5fe7e888a79085e25d4e754860254f95b" protocol=ttrpc version=3
Sep 3 23:22:44.992268 systemd[1]: Started cri-containerd-41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5.scope - libcontainer container 41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5.
Sep 3 23:22:45.018823 containerd[1524]: time="2025-09-03T23:22:45.018777727Z" level=info msg="StartContainer for \"41fc8a577552e2073127fa75b2ff45c439ddb40506524ed18da4eddf09db11e5\" returns successfully"
Sep 3 23:22:45.045141 kubelet[2633]: I0903 23:22:45.044935 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s8ddj" podStartSLOduration=3.044918527 podStartE2EDuration="3.044918527s" podCreationTimestamp="2025-09-03 23:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:22:44.047260201 +0000 UTC m=+7.131017904" watchObservedRunningTime="2025-09-03 23:22:45.044918527 +0000 UTC m=+8.128676230"
Sep 3 23:22:46.955640 kubelet[2633]: E0903 23:22:46.955561 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:46.984605 kubelet[2633]: I0903 23:22:46.984524 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-2jcql" podStartSLOduration=2.48241952 podStartE2EDuration="3.984508172s" podCreationTimestamp="2025-09-03 23:22:43 +0000 UTC" firstStartedPulling="2025-09-03 23:22:43.45444069 +0000 UTC m=+6.538198393" lastFinishedPulling="2025-09-03 23:22:44.956529342 +0000 UTC m=+8.040287045" observedRunningTime="2025-09-03 23:22:45.045565869 +0000 UTC m=+8.129323572" watchObservedRunningTime="2025-09-03 23:22:46.984508172 +0000 UTC m=+10.068265835"
Sep 3 23:22:47.043387 kubelet[2633]: E0903 23:22:47.043293 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:49.795752 kubelet[2633]: E0903 23:22:49.795721 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:50.262559 sudo[1711]: pam_unix(sudo:session): session closed for user root
Sep 3 23:22:50.264242 sshd[1710]: Connection closed by 10.0.0.1 port 47994
Sep 3 23:22:50.264630 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
Sep 3 23:22:50.269772 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:47994.service: Deactivated successfully.
Sep 3 23:22:50.272682 systemd[1]: session-7.scope: Deactivated successfully.
Sep 3 23:22:50.272855 systemd[1]: session-7.scope: Consumed 6.938s CPU time, 230.4M memory peak.
Sep 3 23:22:50.275516 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit.
Sep 3 23:22:50.276846 systemd-logind[1496]: Removed session 7.
Sep 3 23:22:50.773987 kubelet[2633]: E0903 23:22:50.773947 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:51.050630 kubelet[2633]: E0903 23:22:51.050522 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:52.519463 update_engine[1500]: I20250903 23:22:52.519397 1500 update_attempter.cc:509] Updating boot flags...
Sep 3 23:22:53.838447 systemd[1]: Created slice kubepods-besteffort-pod28176ac9_c9f0_4d98_9af8_9a526e730c7a.slice - libcontainer container kubepods-besteffort-pod28176ac9_c9f0_4d98_9af8_9a526e730c7a.slice.
Sep 3 23:22:53.921927 kubelet[2633]: I0903 23:22:53.921882 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28176ac9-c9f0-4d98-9af8-9a526e730c7a-tigera-ca-bundle\") pod \"calico-typha-657856b694-w9d87\" (UID: \"28176ac9-c9f0-4d98-9af8-9a526e730c7a\") " pod="calico-system/calico-typha-657856b694-w9d87"
Sep 3 23:22:53.921927 kubelet[2633]: I0903 23:22:53.921930 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/28176ac9-c9f0-4d98-9af8-9a526e730c7a-typha-certs\") pod \"calico-typha-657856b694-w9d87\" (UID: \"28176ac9-c9f0-4d98-9af8-9a526e730c7a\") " pod="calico-system/calico-typha-657856b694-w9d87"
Sep 3 23:22:53.922338 kubelet[2633]: I0903 23:22:53.921952 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcc4\" (UniqueName: \"kubernetes.io/projected/28176ac9-c9f0-4d98-9af8-9a526e730c7a-kube-api-access-hbcc4\") pod \"calico-typha-657856b694-w9d87\" (UID: \"28176ac9-c9f0-4d98-9af8-9a526e730c7a\") " pod="calico-system/calico-typha-657856b694-w9d87"
Sep 3 23:22:54.145602 kubelet[2633]: E0903 23:22:54.145308 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:22:54.146125 containerd[1524]: time="2025-09-03T23:22:54.146031772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657856b694-w9d87,Uid:28176ac9-c9f0-4d98-9af8-9a526e730c7a,Namespace:calico-system,Attempt:0,}"
Sep 3 23:22:54.183587 systemd[1]: Created slice kubepods-besteffort-podccd935d4_9e58_404d_bff1_cd4539808ffc.slice - libcontainer container kubepods-besteffort-podccd935d4_9e58_404d_bff1_cd4539808ffc.slice.
Sep 3 23:22:54.195836 containerd[1524]: time="2025-09-03T23:22:54.195245665Z" level=info msg="connecting to shim 6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05" address="unix:///run/containerd/s/db6939305cda1e5781f814a22859dfb08d4c3c5d6c9d48ab6894d73e7848f81e" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:22:54.224966 kubelet[2633]: I0903 23:22:54.224786 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ccd935d4-9e58-404d-bff1-cd4539808ffc-node-certs\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.224966 kubelet[2633]: I0903 23:22:54.224838 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-var-lib-calico\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.224966 kubelet[2633]: I0903 23:22:54.224860 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccd935d4-9e58-404d-bff1-cd4539808ffc-tigera-ca-bundle\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.224966 kubelet[2633]: I0903 23:22:54.224912 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-cni-net-dir\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225173 kubelet[2633]: I0903 23:22:54.224995 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-cni-log-dir\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225173 kubelet[2633]: I0903 23:22:54.225025 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-flexvol-driver-host\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225173 kubelet[2633]: I0903 23:22:54.225047 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-policysync\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225173 kubelet[2633]: I0903 23:22:54.225063 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-var-run-calico\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225173 kubelet[2633]: I0903 23:22:54.225095 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-cni-bin-dir\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225274 kubelet[2633]: I0903 23:22:54.225156 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-lib-modules\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225274 kubelet[2633]: I0903 23:22:54.225193 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccd935d4-9e58-404d-bff1-cd4539808ffc-xtables-lock\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.225274 kubelet[2633]: I0903 23:22:54.225217 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdtw5\" (UniqueName: \"kubernetes.io/projected/ccd935d4-9e58-404d-bff1-cd4539808ffc-kube-api-access-vdtw5\") pod \"calico-node-snkxh\" (UID: \"ccd935d4-9e58-404d-bff1-cd4539808ffc\") " pod="calico-system/calico-node-snkxh"
Sep 3 23:22:54.258304 systemd[1]: Started cri-containerd-6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05.scope - libcontainer container 6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05.
Sep 3 23:22:54.297583 containerd[1524]: time="2025-09-03T23:22:54.297538800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-657856b694-w9d87,Uid:28176ac9-c9f0-4d98-9af8-9a526e730c7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05\"" Sep 3 23:22:54.299336 kubelet[2633]: E0903 23:22:54.299311 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:54.302210 containerd[1524]: time="2025-09-03T23:22:54.301999946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 3 23:22:54.328213 kubelet[2633]: E0903 23:22:54.328155 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.328213 kubelet[2633]: W0903 23:22:54.328179 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.328213 kubelet[2633]: E0903 23:22:54.328200 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.330604 kubelet[2633]: E0903 23:22:54.330585 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.330780 kubelet[2633]: W0903 23:22:54.330665 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.330780 kubelet[2633]: E0903 23:22:54.330683 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.338522 kubelet[2633]: E0903 23:22:54.338504 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.338522 kubelet[2633]: W0903 23:22:54.338520 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.338618 kubelet[2633]: E0903 23:22:54.338532 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.456177 kubelet[2633]: E0903 23:22:54.455583 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:22:54.489544 containerd[1524]: time="2025-09-03T23:22:54.489421354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snkxh,Uid:ccd935d4-9e58-404d-bff1-cd4539808ffc,Namespace:calico-system,Attempt:0,}" Sep 3 23:22:54.509557 kubelet[2633]: E0903 23:22:54.509532 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.509557 kubelet[2633]: W0903 23:22:54.509552 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.509729 kubelet[2633]: E0903 23:22:54.509570 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.509771 kubelet[2633]: E0903 23:22:54.509743 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.509771 kubelet[2633]: W0903 23:22:54.509751 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.509771 kubelet[2633]: E0903 23:22:54.509759 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.509918 kubelet[2633]: E0903 23:22:54.509907 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.509918 kubelet[2633]: W0903 23:22:54.509917 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.509971 kubelet[2633]: E0903 23:22:54.509925 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.510091 kubelet[2633]: E0903 23:22:54.510080 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510091 kubelet[2633]: W0903 23:22:54.510091 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510189 kubelet[2633]: E0903 23:22:54.510099 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.510249 kubelet[2633]: E0903 23:22:54.510236 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510249 kubelet[2633]: W0903 23:22:54.510247 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510305 kubelet[2633]: E0903 23:22:54.510266 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.510389 kubelet[2633]: E0903 23:22:54.510379 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510389 kubelet[2633]: W0903 23:22:54.510388 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510454 kubelet[2633]: E0903 23:22:54.510396 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.510526 kubelet[2633]: E0903 23:22:54.510515 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510526 kubelet[2633]: W0903 23:22:54.510525 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510591 kubelet[2633]: E0903 23:22:54.510542 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.510671 kubelet[2633]: E0903 23:22:54.510658 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510671 kubelet[2633]: W0903 23:22:54.510668 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510722 kubelet[2633]: E0903 23:22:54.510676 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.510887 kubelet[2633]: E0903 23:22:54.510870 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.510887 kubelet[2633]: W0903 23:22:54.510879 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.510887 kubelet[2633]: E0903 23:22:54.510887 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.511172 kubelet[2633]: E0903 23:22:54.511157 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.511172 kubelet[2633]: W0903 23:22:54.511172 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.511231 kubelet[2633]: E0903 23:22:54.511182 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.511701 kubelet[2633]: E0903 23:22:54.511686 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.511701 kubelet[2633]: W0903 23:22:54.511700 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.511802 kubelet[2633]: E0903 23:22:54.511711 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.511894 kubelet[2633]: E0903 23:22:54.511881 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.511894 kubelet[2633]: W0903 23:22:54.511893 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.511957 kubelet[2633]: E0903 23:22:54.511901 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.512381 kubelet[2633]: E0903 23:22:54.512367 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.512426 kubelet[2633]: W0903 23:22:54.512384 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.512426 kubelet[2633]: E0903 23:22:54.512394 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.512545 kubelet[2633]: E0903 23:22:54.512535 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.512545 kubelet[2633]: W0903 23:22:54.512545 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.512597 kubelet[2633]: E0903 23:22:54.512552 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.512696 kubelet[2633]: E0903 23:22:54.512686 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.512696 kubelet[2633]: W0903 23:22:54.512696 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.512755 kubelet[2633]: E0903 23:22:54.512703 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.512849 kubelet[2633]: E0903 23:22:54.512839 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.512849 kubelet[2633]: W0903 23:22:54.512849 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.512896 kubelet[2633]: E0903 23:22:54.512857 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.513079 kubelet[2633]: E0903 23:22:54.513065 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.513079 kubelet[2633]: W0903 23:22:54.513079 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.513164 kubelet[2633]: E0903 23:22:54.513087 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.513242 kubelet[2633]: E0903 23:22:54.513232 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.513272 kubelet[2633]: W0903 23:22:54.513242 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.513272 kubelet[2633]: E0903 23:22:54.513260 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.513388 kubelet[2633]: E0903 23:22:54.513379 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.513419 kubelet[2633]: W0903 23:22:54.513389 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.513419 kubelet[2633]: E0903 23:22:54.513399 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.513596 kubelet[2633]: E0903 23:22:54.513535 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.513596 kubelet[2633]: W0903 23:22:54.513553 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.513596 kubelet[2633]: E0903 23:22:54.513561 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.513707 containerd[1524]: time="2025-09-03T23:22:54.513671999Z" level=info msg="connecting to shim ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763" address="unix:///run/containerd/s/14fdfe3b0973dd6d667801a3a5a1f2ae11a7ae8c6ada1eecdcf44a8d14f65889" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:22:54.528197 kubelet[2633]: E0903 23:22:54.528031 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.528197 kubelet[2633]: W0903 23:22:54.528047 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.528197 kubelet[2633]: E0903 23:22:54.528064 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.528197 kubelet[2633]: I0903 23:22:54.528090 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6f905e39-f56b-4766-b50d-574df013d5be-registration-dir\") pod \"csi-node-driver-zw6hs\" (UID: \"6f905e39-f56b-4766-b50d-574df013d5be\") " pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:22:54.528624 kubelet[2633]: E0903 23:22:54.528549 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.528624 kubelet[2633]: W0903 23:22:54.528597 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.528806 kubelet[2633]: E0903 23:22:54.528689 2633 plugins.go:691] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.529971 kubelet[2633]: I0903 23:22:54.529948 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f905e39-f56b-4766-b50d-574df013d5be-kubelet-dir\") pod \"csi-node-driver-zw6hs\" (UID: \"6f905e39-f56b-4766-b50d-574df013d5be\") " pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:22:54.530334 kubelet[2633]: E0903 23:22:54.530087 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.530334 kubelet[2633]: W0903 23:22:54.530222 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.530334 kubelet[2633]: E0903 23:22:54.530234 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.530760 kubelet[2633]: E0903 23:22:54.530667 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.530760 kubelet[2633]: W0903 23:22:54.530681 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.530760 kubelet[2633]: E0903 23:22:54.530741 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.531155 kubelet[2633]: E0903 23:22:54.531136 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.531356 kubelet[2633]: W0903 23:22:54.531262 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.531356 kubelet[2633]: E0903 23:22:54.531341 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.531746 kubelet[2633]: E0903 23:22:54.531690 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.531871 kubelet[2633]: W0903 23:22:54.531839 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.532187 kubelet[2633]: E0903 23:22:54.531979 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.532323 kubelet[2633]: E0903 23:22:54.532307 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.532431 kubelet[2633]: W0903 23:22:54.532417 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.532560 kubelet[2633]: E0903 23:22:54.532512 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.532560 kubelet[2633]: I0903 23:22:54.532400 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6f905e39-f56b-4766-b50d-574df013d5be-socket-dir\") pod \"csi-node-driver-zw6hs\" (UID: \"6f905e39-f56b-4766-b50d-574df013d5be\") " pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:22:54.532902 kubelet[2633]: E0903 23:22:54.532796 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.532902 kubelet[2633]: W0903 23:22:54.532808 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.532902 kubelet[2633]: E0903 23:22:54.532831 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.533038 kubelet[2633]: E0903 23:22:54.533026 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.533103 kubelet[2633]: W0903 23:22:54.533093 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.533187 kubelet[2633]: E0903 23:22:54.533175 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.533397 kubelet[2633]: E0903 23:22:54.533384 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.533558 kubelet[2633]: W0903 23:22:54.533445 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.533558 kubelet[2633]: E0903 23:22:54.533460 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.533558 kubelet[2633]: I0903 23:22:54.533525 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbqs\" (UniqueName: \"kubernetes.io/projected/6f905e39-f56b-4766-b50d-574df013d5be-kube-api-access-prbqs\") pod \"csi-node-driver-zw6hs\" (UID: \"6f905e39-f56b-4766-b50d-574df013d5be\") " pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:22:54.533837 kubelet[2633]: E0903 23:22:54.533681 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.533837 kubelet[2633]: W0903 23:22:54.533695 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.533837 kubelet[2633]: E0903 23:22:54.533713 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.534509 kubelet[2633]: E0903 23:22:54.533936 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.534509 kubelet[2633]: W0903 23:22:54.533945 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.534509 kubelet[2633]: E0903 23:22:54.534182 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.535121 kubelet[2633]: E0903 23:22:54.534863 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.535121 kubelet[2633]: W0903 23:22:54.535071 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.535121 kubelet[2633]: E0903 23:22:54.535088 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.535284 kubelet[2633]: I0903 23:22:54.535267 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6f905e39-f56b-4766-b50d-574df013d5be-varrun\") pod \"csi-node-driver-zw6hs\" (UID: \"6f905e39-f56b-4766-b50d-574df013d5be\") " pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:22:54.535411 kubelet[2633]: E0903 23:22:54.535387 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.535440 kubelet[2633]: W0903 23:22:54.535423 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.535440 kubelet[2633]: E0903 23:22:54.535438 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:54.535572 kubelet[2633]: E0903 23:22:54.535559 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:54.535572 kubelet[2633]: W0903 23:22:54.535570 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:54.535619 kubelet[2633]: E0903 23:22:54.535578 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:54.540260 systemd[1]: Started cri-containerd-ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763.scope - libcontainer container ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763. Sep 3 23:22:54.572735 containerd[1524]: time="2025-09-03T23:22:54.572690796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snkxh,Uid:ccd935d4-9e58-404d-bff1-cd4539808ffc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\"" Sep 3 23:22:55.283015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580662806.mount: Deactivated successfully. 
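The kubelet messages above record one failure seen three ways: the dynamic plugin prober finds the directory nodeagent~uds under the FlexVolume plugin root, tries to exec its uds binary with the init argument, finds no executable (hence no output), and then fails to JSON-decode the empty string. A minimal sketch of that probe sequence, assuming nothing about kubelet internals beyond what the log shows (probe_driver is an illustrative helper, not kubelet code):

```python
import json
import subprocess

def probe_driver(driver_path):
    """Mimic the kubelet's FlexVolume 'init' probe: run the driver binary
    and JSON-decode its stdout. A missing binary yields empty output, which
    reproduces both errors seen in the log."""
    try:
        out = subprocess.run([driver_path, "init"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        # -> "driver call failed: ... executable file not found in $PATH"
        out = ""
    try:
        return json.loads(out)
    except json.JSONDecodeError as exc:
        # -> "Failed to unmarshal output ... unexpected end of JSON input"
        return {"status": "Failure", "message": str(exc)}

result = probe_driver(
    "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
print(result["status"])  # "Failure" whenever the driver binary is absent
```

Installing the missing uds driver under the nodeagent~uds directory, or removing the empty directory so the prober stops retrying it, would silence these messages; both remediations are assumptions, not something the log itself confirms.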
Sep 3 23:22:56.006475 kubelet[2633]: E0903 23:22:56.006426 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:22:56.018336 containerd[1524]: time="2025-09-03T23:22:56.018280881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:22:56.018892 containerd[1524]: time="2025-09-03T23:22:56.018859272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 3 23:22:56.035341 containerd[1524]: time="2025-09-03T23:22:56.035280202Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:22:56.036387 containerd[1524]: time="2025-09-03T23:22:56.036290336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.734247068s" Sep 3 23:22:56.036387 containerd[1524]: time="2025-09-03T23:22:56.036328178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 3 23:22:56.037220 containerd[1524]: time="2025-09-03T23:22:56.037184825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 3 23:22:56.038228 containerd[1524]: time="2025-09-03T23:22:56.038134276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 3 23:22:56.051851 containerd[1524]: time="2025-09-03T23:22:56.051809057Z" level=info msg="CreateContainer within sandbox \"6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 3 23:22:56.058148 containerd[1524]: time="2025-09-03T23:22:56.057993312Z" level=info msg="Container a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:22:56.065833 containerd[1524]: time="2025-09-03T23:22:56.065792094Z" level=info msg="CreateContainer within sandbox \"6d55a9339a93c120db68c840b33126e22a77393c85913b76a64ef22c4b76df05\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132\"" Sep 3 23:22:56.066670 containerd[1524]: time="2025-09-03T23:22:56.066636500Z" level=info msg="StartContainer for \"a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132\"" Sep 3 23:22:56.068549 containerd[1524]: time="2025-09-03T23:22:56.068379954Z" level=info msg="connecting to shim a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132" address="unix:///run/containerd/s/db6939305cda1e5781f814a22859dfb08d4c3c5d6c9d48ab6894d73e7848f81e" protocol=ttrpc version=3 Sep 3 23:22:56.094336 systemd[1]: Started cri-containerd-a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132.scope - libcontainer container a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132. 
Sep 3 23:22:56.134931 containerd[1524]: time="2025-09-03T23:22:56.134877196Z" level=info msg="StartContainer for \"a81b5f3e4ed566f59b50ee2195534938a65116d1a11311192a5100da47449132\" returns successfully" Sep 3 23:22:57.057744 containerd[1524]: time="2025-09-03T23:22:57.057698153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:22:57.058429 containerd[1524]: time="2025-09-03T23:22:57.058401910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 3 23:22:57.059299 containerd[1524]: time="2025-09-03T23:22:57.059252514Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:22:57.061501 containerd[1524]: time="2025-09-03T23:22:57.061472188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:22:57.061982 containerd[1524]: time="2025-09-03T23:22:57.061942253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.023766414s" Sep 3 23:22:57.062038 containerd[1524]: time="2025-09-03T23:22:57.061981735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 3 23:22:57.064304 containerd[1524]: 
time="2025-09-03T23:22:57.064274013Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 3 23:22:57.071255 containerd[1524]: time="2025-09-03T23:22:57.071215052Z" level=info msg="Container 7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:22:57.078690 containerd[1524]: time="2025-09-03T23:22:57.078635196Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\"" Sep 3 23:22:57.080364 containerd[1524]: time="2025-09-03T23:22:57.080330963Z" level=info msg="StartContainer for \"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\"" Sep 3 23:22:57.081909 containerd[1524]: time="2025-09-03T23:22:57.081881603Z" level=info msg="connecting to shim 7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10" address="unix:///run/containerd/s/14fdfe3b0973dd6d667801a3a5a1f2ae11a7ae8c6ada1eecdcf44a8d14f65889" protocol=ttrpc version=3 Sep 3 23:22:57.101723 kubelet[2633]: E0903 23:22:57.101660 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:57.104412 systemd[1]: Started cri-containerd-7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10.scope - libcontainer container 7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10. 
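The dns.go:153 warning above fires because the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS), so the kubelet omits the extras and logs the trimmed line it actually applied. A small sketch of that trimming behavior, assuming only the limit of three; the helper name and the fourth server are illustrative:

```python
MAX_NAMESERVERS = 3  # glibc MAXNS; the limit behind the kubelet warning

def applied_nameservers(configured):
    """Keep only the first MAX_NAMESERVERS entries, as the warning's
    'applied nameserver line' reports."""
    return configured[:MAX_NAMESERVERS]

# With a fourth server configured, only the first three are applied,
# matching the applied line "1.1.1.1 1.0.0.1 8.8.8.8" in the log above.
print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
# → ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```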
Sep 3 23:22:57.134633 kubelet[2633]: E0903 23:22:57.134603 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.134759 kubelet[2633]: W0903 23:22:57.134720 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.134759 kubelet[2633]: E0903 23:22:57.134746 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.135142 kubelet[2633]: E0903 23:22:57.135123 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.135142 kubelet[2633]: W0903 23:22:57.135139 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.135222 kubelet[2633]: E0903 23:22:57.135151 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.135457 kubelet[2633]: E0903 23:22:57.135441 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.135457 kubelet[2633]: W0903 23:22:57.135454 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.135510 kubelet[2633]: E0903 23:22:57.135466 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.135783 kubelet[2633]: E0903 23:22:57.135747 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.135815 kubelet[2633]: W0903 23:22:57.135782 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.135868 kubelet[2633]: E0903 23:22:57.135794 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.136238 kubelet[2633]: E0903 23:22:57.136221 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.136238 kubelet[2633]: W0903 23:22:57.136236 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.136340 kubelet[2633]: E0903 23:22:57.136247 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.137075 kubelet[2633]: E0903 23:22:57.137057 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.137075 kubelet[2633]: W0903 23:22:57.137074 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.137205 kubelet[2633]: E0903 23:22:57.137168 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.137484 kubelet[2633]: E0903 23:22:57.137467 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.137511 kubelet[2633]: W0903 23:22:57.137481 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.137511 kubelet[2633]: E0903 23:22:57.137497 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.137780 kubelet[2633]: E0903 23:22:57.137724 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.137823 kubelet[2633]: W0903 23:22:57.137781 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.137823 kubelet[2633]: E0903 23:22:57.137795 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.138115 kubelet[2633]: E0903 23:22:57.138085 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.138115 kubelet[2633]: W0903 23:22:57.138100 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.138221 kubelet[2633]: E0903 23:22:57.138204 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.138716 kubelet[2633]: E0903 23:22:57.138699 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.138716 kubelet[2633]: W0903 23:22:57.138713 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.138781 kubelet[2633]: E0903 23:22:57.138736 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.138950 kubelet[2633]: E0903 23:22:57.138934 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.138950 kubelet[2633]: W0903 23:22:57.138948 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.139030 kubelet[2633]: E0903 23:22:57.138957 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.139510 kubelet[2633]: E0903 23:22:57.139143 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.139510 kubelet[2633]: W0903 23:22:57.139155 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.139510 kubelet[2633]: E0903 23:22:57.139164 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.139656 kubelet[2633]: E0903 23:22:57.139586 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.139656 kubelet[2633]: W0903 23:22:57.139608 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.139656 kubelet[2633]: E0903 23:22:57.139618 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:22:57.158250 kubelet[2633]: E0903 23:22:57.156926 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.158559 containerd[1524]: time="2025-09-03T23:22:57.156000195Z" level=info msg="StartContainer for \"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\" returns successfully" Sep 3 23:22:57.158607 kubelet[2633]: W0903 23:22:57.156937 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.158607 kubelet[2633]: E0903 23:22:57.156948 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.158607 kubelet[2633]: E0903 23:22:57.157265 2633 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:22:57.158607 kubelet[2633]: W0903 23:22:57.157281 2633 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:22:57.158607 kubelet[2633]: E0903 23:22:57.157320 2633 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:22:57.166947 systemd[1]: cri-containerd-7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10.scope: Deactivated successfully. Sep 3 23:22:57.167272 systemd[1]: cri-containerd-7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10.scope: Consumed 32ms CPU time, 6.1M memory peak, 4.1M written to disk. 
Sep 3 23:22:57.186010 containerd[1524]: time="2025-09-03T23:22:57.185930022Z" level=info msg="received exit event container_id:\"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\" id:\"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\" pid:3294 exited_at:{seconds:1756941777 nanos:175051180}" Sep 3 23:22:57.196273 containerd[1524]: time="2025-09-03T23:22:57.196221754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\" id:\"7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10\" pid:3294 exited_at:{seconds:1756941777 nanos:175051180}" Sep 3 23:22:57.222673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b7c65ead83c8e31662d77bcf03ac82044ceff8aa7553cbc793705bf0c94ed10-rootfs.mount: Deactivated successfully. Sep 3 23:22:58.006436 kubelet[2633]: E0903 23:22:58.006232 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:22:58.105087 kubelet[2633]: I0903 23:22:58.105056 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:22:58.105435 kubelet[2633]: E0903 23:22:58.105358 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:22:58.106824 containerd[1524]: time="2025-09-03T23:22:58.106789909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 3 23:22:58.123452 kubelet[2633]: I0903 23:22:58.123260 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-657856b694-w9d87" podStartSLOduration=3.385544455 
podStartE2EDuration="5.123241641s" podCreationTimestamp="2025-09-03 23:22:53 +0000 UTC" firstStartedPulling="2025-09-03 23:22:54.299792015 +0000 UTC m=+17.383549718" lastFinishedPulling="2025-09-03 23:22:56.037489201 +0000 UTC m=+19.121246904" observedRunningTime="2025-09-03 23:22:57.115897282 +0000 UTC m=+20.199655105" watchObservedRunningTime="2025-09-03 23:22:58.123241641 +0000 UTC m=+21.206999304" Sep 3 23:23:00.006840 kubelet[2633]: E0903 23:23:00.006476 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:23:01.368746 containerd[1524]: time="2025-09-03T23:23:01.368692214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:01.369821 containerd[1524]: time="2025-09-03T23:23:01.369793022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 3 23:23:01.370957 containerd[1524]: time="2025-09-03T23:23:01.370727022Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:01.374143 containerd[1524]: time="2025-09-03T23:23:01.373095725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:01.376279 containerd[1524]: time="2025-09-03T23:23:01.376243101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.26941347s" Sep 3 23:23:01.376279 containerd[1524]: time="2025-09-03T23:23:01.376277982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 3 23:23:01.378851 containerd[1524]: time="2025-09-03T23:23:01.378661846Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 3 23:23:01.386900 containerd[1524]: time="2025-09-03T23:23:01.386451863Z" level=info msg="Container 73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:01.400848 containerd[1524]: time="2025-09-03T23:23:01.400753162Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\"" Sep 3 23:23:01.402918 containerd[1524]: time="2025-09-03T23:23:01.401234383Z" level=info msg="StartContainer for \"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\"" Sep 3 23:23:01.402918 containerd[1524]: time="2025-09-03T23:23:01.402704566Z" level=info msg="connecting to shim 73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1" address="unix:///run/containerd/s/14fdfe3b0973dd6d667801a3a5a1f2ae11a7ae8c6ada1eecdcf44a8d14f65889" protocol=ttrpc version=3 Sep 3 23:23:01.424283 systemd[1]: Started cri-containerd-73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1.scope - libcontainer container 73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1. 
Sep 3 23:23:01.460852 containerd[1524]: time="2025-09-03T23:23:01.460797081Z" level=info msg="StartContainer for \"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\" returns successfully" Sep 3 23:23:02.006306 kubelet[2633]: E0903 23:23:02.006233 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:23:02.008608 systemd[1]: cri-containerd-73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1.scope: Deactivated successfully. Sep 3 23:23:02.009306 systemd[1]: cri-containerd-73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1.scope: Consumed 483ms CPU time, 179.7M memory peak, 3.3M read from disk, 165.8M written to disk. Sep 3 23:23:02.011358 containerd[1524]: time="2025-09-03T23:23:02.011214092Z" level=info msg="received exit event container_id:\"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\" id:\"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\" pid:3393 exited_at:{seconds:1756941782 nanos:10986803}" Sep 3 23:23:02.011358 containerd[1524]: time="2025-09-03T23:23:02.011303376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\" id:\"73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1\" pid:3393 exited_at:{seconds:1756941782 nanos:10986803}" Sep 3 23:23:02.024105 kubelet[2633]: I0903 23:23:02.023800 2633 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 3 23:23:02.036186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73cfa0ed92ecb10c772d73315958332bbaf7f365fbc4fb31af959e4b5958c2b1-rootfs.mount: Deactivated successfully. 
Sep 3 23:23:02.122006 kubelet[2633]: I0903 23:23:02.121595 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsn4\" (UniqueName: \"kubernetes.io/projected/f3d1e3e7-af2e-4654-8632-4fbd431a9553-kube-api-access-qvsn4\") pod \"calico-apiserver-9654f7d87-rfdl9\" (UID: \"f3d1e3e7-af2e-4654-8632-4fbd431a9553\") " pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" Sep 3 23:23:02.122006 kubelet[2633]: I0903 23:23:02.121676 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-backend-key-pair\") pod \"whisker-5cdb9878fd-ck754\" (UID: \"bef2948a-779e-4db6-a501-40ae4816bc6d\") " pod="calico-system/whisker-5cdb9878fd-ck754" Sep 3 23:23:02.122006 kubelet[2633]: I0903 23:23:02.121706 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5fb2499-db73-4ef5-ab7e-7297d2bb1a00-goldmane-ca-bundle\") pod \"goldmane-7988f88666-ns4wm\" (UID: \"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00\") " pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.122006 kubelet[2633]: I0903 23:23:02.121729 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2hzz\" (UniqueName: \"kubernetes.io/projected/f5fb2499-db73-4ef5-ab7e-7297d2bb1a00-kube-api-access-r2hzz\") pod \"goldmane-7988f88666-ns4wm\" (UID: \"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00\") " pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.122006 kubelet[2633]: I0903 23:23:02.121778 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-ca-bundle\") pod \"whisker-5cdb9878fd-ck754\" (UID: 
\"bef2948a-779e-4db6-a501-40ae4816bc6d\") " pod="calico-system/whisker-5cdb9878fd-ck754" Sep 3 23:23:02.122324 kubelet[2633]: I0903 23:23:02.121802 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs59\" (UniqueName: \"kubernetes.io/projected/bef2948a-779e-4db6-a501-40ae4816bc6d-kube-api-access-sxs59\") pod \"whisker-5cdb9878fd-ck754\" (UID: \"bef2948a-779e-4db6-a501-40ae4816bc6d\") " pod="calico-system/whisker-5cdb9878fd-ck754" Sep 3 23:23:02.122324 kubelet[2633]: I0903 23:23:02.121844 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fb2499-db73-4ef5-ab7e-7297d2bb1a00-config\") pod \"goldmane-7988f88666-ns4wm\" (UID: \"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00\") " pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.122324 kubelet[2633]: I0903 23:23:02.121981 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlx9\" (UniqueName: \"kubernetes.io/projected/4588fcdf-fab0-4406-a001-bb8a1ef4c7c6-kube-api-access-7nlx9\") pod \"calico-kube-controllers-7dbddb5b44-vx69p\" (UID: \"4588fcdf-fab0-4406-a001-bb8a1ef4c7c6\") " pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" Sep 3 23:23:02.122324 kubelet[2633]: I0903 23:23:02.122026 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f813de9-1151-4a9a-939a-9297b66b14a4-config-volume\") pod \"coredns-7c65d6cfc9-th5cd\" (UID: \"8f813de9-1151-4a9a-939a-9297b66b14a4\") " pod="kube-system/coredns-7c65d6cfc9-th5cd" Sep 3 23:23:02.122324 kubelet[2633]: I0903 23:23:02.122059 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt4nl\" (UniqueName: 
\"kubernetes.io/projected/0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c-kube-api-access-dt4nl\") pod \"calico-apiserver-9654f7d87-p4m7t\" (UID: \"0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c\") " pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" Sep 3 23:23:02.122435 kubelet[2633]: I0903 23:23:02.122094 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3d1e3e7-af2e-4654-8632-4fbd431a9553-calico-apiserver-certs\") pod \"calico-apiserver-9654f7d87-rfdl9\" (UID: \"f3d1e3e7-af2e-4654-8632-4fbd431a9553\") " pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" Sep 3 23:23:02.122435 kubelet[2633]: I0903 23:23:02.122141 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f5fb2499-db73-4ef5-ab7e-7297d2bb1a00-goldmane-key-pair\") pod \"goldmane-7988f88666-ns4wm\" (UID: \"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00\") " pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.122435 kubelet[2633]: I0903 23:23:02.122267 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4588fcdf-fab0-4406-a001-bb8a1ef4c7c6-tigera-ca-bundle\") pod \"calico-kube-controllers-7dbddb5b44-vx69p\" (UID: \"4588fcdf-fab0-4406-a001-bb8a1ef4c7c6\") " pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" Sep 3 23:23:02.122435 kubelet[2633]: I0903 23:23:02.122324 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73073bd3-8a46-474b-99d6-49246ae163ea-config-volume\") pod \"coredns-7c65d6cfc9-fxvbp\" (UID: \"73073bd3-8a46-474b-99d6-49246ae163ea\") " pod="kube-system/coredns-7c65d6cfc9-fxvbp" Sep 3 23:23:02.122513 kubelet[2633]: I0903 23:23:02.122344 2633 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqktw\" (UniqueName: \"kubernetes.io/projected/73073bd3-8a46-474b-99d6-49246ae163ea-kube-api-access-lqktw\") pod \"coredns-7c65d6cfc9-fxvbp\" (UID: \"73073bd3-8a46-474b-99d6-49246ae163ea\") " pod="kube-system/coredns-7c65d6cfc9-fxvbp" Sep 3 23:23:02.124127 kubelet[2633]: I0903 23:23:02.122534 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c-calico-apiserver-certs\") pod \"calico-apiserver-9654f7d87-p4m7t\" (UID: \"0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c\") " pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" Sep 3 23:23:02.124127 kubelet[2633]: I0903 23:23:02.122572 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrsj\" (UniqueName: \"kubernetes.io/projected/8f813de9-1151-4a9a-939a-9297b66b14a4-kube-api-access-xwrsj\") pod \"coredns-7c65d6cfc9-th5cd\" (UID: \"8f813de9-1151-4a9a-939a-9297b66b14a4\") " pod="kube-system/coredns-7c65d6cfc9-th5cd" Sep 3 23:23:02.135480 systemd[1]: Created slice kubepods-besteffort-pod4588fcdf_fab0_4406_a001_bb8a1ef4c7c6.slice - libcontainer container kubepods-besteffort-pod4588fcdf_fab0_4406_a001_bb8a1ef4c7c6.slice. Sep 3 23:23:02.145877 systemd[1]: Created slice kubepods-burstable-pod8f813de9_1151_4a9a_939a_9297b66b14a4.slice - libcontainer container kubepods-burstable-pod8f813de9_1151_4a9a_939a_9297b66b14a4.slice. Sep 3 23:23:02.152630 systemd[1]: Created slice kubepods-besteffort-podf3d1e3e7_af2e_4654_8632_4fbd431a9553.slice - libcontainer container kubepods-besteffort-podf3d1e3e7_af2e_4654_8632_4fbd431a9553.slice. Sep 3 23:23:02.158274 systemd[1]: Created slice kubepods-besteffort-podbef2948a_779e_4db6_a501_40ae4816bc6d.slice - libcontainer container kubepods-besteffort-podbef2948a_779e_4db6_a501_40ae4816bc6d.slice. 
Sep 3 23:23:02.163713 systemd[1]: Created slice kubepods-besteffort-podf5fb2499_db73_4ef5_ab7e_7297d2bb1a00.slice - libcontainer container kubepods-besteffort-podf5fb2499_db73_4ef5_ab7e_7297d2bb1a00.slice. Sep 3 23:23:02.174050 systemd[1]: Created slice kubepods-besteffort-pod0f99d5d6_a36d_49e5_aba8_5e27e0efcd9c.slice - libcontainer container kubepods-besteffort-pod0f99d5d6_a36d_49e5_aba8_5e27e0efcd9c.slice. Sep 3 23:23:02.182525 systemd[1]: Created slice kubepods-burstable-pod73073bd3_8a46_474b_99d6_49246ae163ea.slice - libcontainer container kubepods-burstable-pod73073bd3_8a46_474b_99d6_49246ae163ea.slice. Sep 3 23:23:02.442466 containerd[1524]: time="2025-09-03T23:23:02.442359548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbddb5b44-vx69p,Uid:4588fcdf-fab0-4406-a001-bb8a1ef4c7c6,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:02.451863 kubelet[2633]: E0903 23:23:02.451741 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:02.452585 containerd[1524]: time="2025-09-03T23:23:02.452176916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-th5cd,Uid:8f813de9-1151-4a9a-939a-9297b66b14a4,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:02.458529 containerd[1524]: time="2025-09-03T23:23:02.458499178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-rfdl9,Uid:f3d1e3e7-af2e-4654-8632-4fbd431a9553,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:23:02.468225 containerd[1524]: time="2025-09-03T23:23:02.468099497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdb9878fd-ck754,Uid:bef2948a-779e-4db6-a501-40ae4816bc6d,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:02.468656 containerd[1524]: time="2025-09-03T23:23:02.468627279Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-ns4wm,Uid:f5fb2499-db73-4ef5-ab7e-7297d2bb1a00,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:02.483355 containerd[1524]: time="2025-09-03T23:23:02.483316688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-p4m7t,Uid:0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:23:02.485174 kubelet[2633]: E0903 23:23:02.485074 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:02.486369 containerd[1524]: time="2025-09-03T23:23:02.486339094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fxvbp,Uid:73073bd3-8a46-474b-99d6-49246ae163ea,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:02.572062 containerd[1524]: time="2025-09-03T23:23:02.571945207Z" level=error msg="Failed to destroy network for sandbox \"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.577365 containerd[1524]: time="2025-09-03T23:23:02.577302189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-th5cd,Uid:8f813de9-1151-4a9a-939a-9297b66b14a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.579417 kubelet[2633]: E0903 23:23:02.579177 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.580983 kubelet[2633]: E0903 23:23:02.580930 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-th5cd" Sep 3 23:23:02.585985 kubelet[2633]: E0903 23:23:02.585926 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-th5cd" Sep 3 23:23:02.586135 kubelet[2633]: E0903 23:23:02.586028 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-th5cd_kube-system(8f813de9-1151-4a9a-939a-9297b66b14a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-th5cd_kube-system(8f813de9-1151-4a9a-939a-9297b66b14a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8b2a603825ddeeb0b98ea1e4e66c4ce6c58ab153fc19d40bfbe0706b8d09b5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-th5cd" 
podUID="8f813de9-1151-4a9a-939a-9297b66b14a4" Sep 3 23:23:02.589282 containerd[1524]: time="2025-09-03T23:23:02.589134361Z" level=error msg="Failed to destroy network for sandbox \"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.594877 containerd[1524]: time="2025-09-03T23:23:02.594804436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdb9878fd-ck754,Uid:bef2948a-779e-4db6-a501-40ae4816bc6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.595341 containerd[1524]: time="2025-09-03T23:23:02.594855198Z" level=error msg="Failed to destroy network for sandbox \"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.595412 kubelet[2633]: E0903 23:23:02.595287 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.595412 kubelet[2633]: E0903 23:23:02.595357 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cdb9878fd-ck754" Sep 3 23:23:02.595412 kubelet[2633]: E0903 23:23:02.595392 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cdb9878fd-ck754" Sep 3 23:23:02.595491 kubelet[2633]: E0903 23:23:02.595436 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cdb9878fd-ck754_calico-system(bef2948a-779e-4db6-a501-40ae4816bc6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cdb9878fd-ck754_calico-system(bef2948a-779e-4db6-a501-40ae4816bc6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c274c6bd7beaf06dde30b6f38bdb16518edc91ee3a3289d64d646a0b5eabd272\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cdb9878fd-ck754" podUID="bef2948a-779e-4db6-a501-40ae4816bc6d" Sep 3 23:23:02.596537 containerd[1524]: time="2025-09-03T23:23:02.596496506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fxvbp,Uid:73073bd3-8a46-474b-99d6-49246ae163ea,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.597311 containerd[1524]: time="2025-09-03T23:23:02.597197455Z" level=error msg="Failed to destroy network for sandbox \"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.597373 kubelet[2633]: E0903 23:23:02.597232 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.597373 kubelet[2633]: E0903 23:23:02.597286 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fxvbp" Sep 3 23:23:02.597373 kubelet[2633]: E0903 23:23:02.597303 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fxvbp" Sep 3 23:23:02.597455 kubelet[2633]: E0903 23:23:02.597338 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fxvbp_kube-system(73073bd3-8a46-474b-99d6-49246ae163ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fxvbp_kube-system(73073bd3-8a46-474b-99d6-49246ae163ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d72e46b1a12dd8ce1bd238ebf53c95d2603525d4aa9603ab1931a12a110cd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fxvbp" podUID="73073bd3-8a46-474b-99d6-49246ae163ea" Sep 3 23:23:02.597564 containerd[1524]: time="2025-09-03T23:23:02.597532189Z" level=error msg="Failed to destroy network for sandbox \"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.597777 containerd[1524]: time="2025-09-03T23:23:02.597735158Z" level=error msg="Failed to destroy network for sandbox \"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.598346 containerd[1524]: time="2025-09-03T23:23:02.598304221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-rfdl9,Uid:f3d1e3e7-af2e-4654-8632-4fbd431a9553,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.598801 kubelet[2633]: E0903 23:23:02.598496 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.598801 kubelet[2633]: E0903 23:23:02.598541 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" Sep 3 23:23:02.598801 kubelet[2633]: E0903 23:23:02.598560 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" Sep 3 23:23:02.598933 kubelet[2633]: E0903 23:23:02.598600 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9654f7d87-rfdl9_calico-apiserver(f3d1e3e7-af2e-4654-8632-4fbd431a9553)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-9654f7d87-rfdl9_calico-apiserver(f3d1e3e7-af2e-4654-8632-4fbd431a9553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70cfa7199feb49c11dbbcfe2d714da55a17c1d11c6445cb4d2358134923869d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" podUID="f3d1e3e7-af2e-4654-8632-4fbd431a9553" Sep 3 23:23:02.599278 containerd[1524]: time="2025-09-03T23:23:02.599151496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbddb5b44-vx69p,Uid:4588fcdf-fab0-4406-a001-bb8a1ef4c7c6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.599363 kubelet[2633]: E0903 23:23:02.599317 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.599363 kubelet[2633]: E0903 23:23:02.599349 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" Sep 3 23:23:02.599413 kubelet[2633]: E0903 23:23:02.599364 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" Sep 3 23:23:02.599413 kubelet[2633]: E0903 23:23:02.599395 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dbddb5b44-vx69p_calico-system(4588fcdf-fab0-4406-a001-bb8a1ef4c7c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dbddb5b44-vx69p_calico-system(4588fcdf-fab0-4406-a001-bb8a1ef4c7c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"196350ce1e2869d0b6dc4f9a63e87dd7b9a26658340138718484da3bbeee6932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" podUID="4588fcdf-fab0-4406-a001-bb8a1ef4c7c6" Sep 3 23:23:02.599810 containerd[1524]: time="2025-09-03T23:23:02.599781843Z" level=error msg="Failed to destroy network for sandbox \"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.602488 containerd[1524]: time="2025-09-03T23:23:02.602399511Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-ns4wm,Uid:f5fb2499-db73-4ef5-ab7e-7297d2bb1a00,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.602689 kubelet[2633]: E0903 23:23:02.602593 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.602689 kubelet[2633]: E0903 23:23:02.602638 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.602689 kubelet[2633]: E0903 23:23:02.602669 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-ns4wm" Sep 3 23:23:02.602775 kubelet[2633]: E0903 23:23:02.602709 2633 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-ns4wm_calico-system(f5fb2499-db73-4ef5-ab7e-7297d2bb1a00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-ns4wm_calico-system(f5fb2499-db73-4ef5-ab7e-7297d2bb1a00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b265c72b42fa98fa5eb33590126f544b51b186e6256b701abc8787b47285f02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-ns4wm" podUID="f5fb2499-db73-4ef5-ab7e-7297d2bb1a00" Sep 3 23:23:02.603255 containerd[1524]: time="2025-09-03T23:23:02.603161503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-p4m7t,Uid:0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.603475 kubelet[2633]: E0903 23:23:02.603436 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:02.603608 kubelet[2633]: E0903 23:23:02.603484 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" Sep 3 23:23:02.603608 kubelet[2633]: E0903 23:23:02.603503 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" Sep 3 23:23:02.603608 kubelet[2633]: E0903 23:23:02.603539 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9654f7d87-p4m7t_calico-apiserver(0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9654f7d87-p4m7t_calico-apiserver(0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"074468637f5f6f1be32e0dd3927fe57761cab8c7e556cffbd8f13a3b11170116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" podUID="0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c" Sep 3 23:23:03.138760 containerd[1524]: time="2025-09-03T23:23:03.138463016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 3 23:23:03.388175 systemd[1]: run-netns-cni\x2d3b3ac812\x2d8621\x2d6409\x2d1bb9\x2d5c03663f390c.mount: Deactivated successfully. Sep 3 23:23:03.388274 systemd[1]: run-netns-cni\x2dc498d8f7\x2d67ed\x2d0dbc\x2d3bb8\x2d216202411451.mount: Deactivated successfully. 
Sep 3 23:23:03.388320 systemd[1]: run-netns-cni\x2d842c17d0\x2d51d9\x2d7643\x2ddb1c\x2d21a5feb4428a.mount: Deactivated successfully. Sep 3 23:23:03.388362 systemd[1]: run-netns-cni\x2d0d3fd788\x2ddb01\x2d628b\x2ddcbe\x2de1110e4f57c9.mount: Deactivated successfully. Sep 3 23:23:04.015654 systemd[1]: Created slice kubepods-besteffort-pod6f905e39_f56b_4766_b50d_574df013d5be.slice - libcontainer container kubepods-besteffort-pod6f905e39_f56b_4766_b50d_574df013d5be.slice. Sep 3 23:23:04.019038 containerd[1524]: time="2025-09-03T23:23:04.019003505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw6hs,Uid:6f905e39-f56b-4766-b50d-574df013d5be,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:04.077122 containerd[1524]: time="2025-09-03T23:23:04.077054567Z" level=error msg="Failed to destroy network for sandbox \"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:04.078923 systemd[1]: run-netns-cni\x2d9cc39bbe\x2df89f\x2db157\x2d83e3\x2d9e735ca080f9.mount: Deactivated successfully. 
Sep 3 23:23:04.079957 containerd[1524]: time="2025-09-03T23:23:04.079847034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw6hs,Uid:6f905e39-f56b-4766-b50d-574df013d5be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:04.080429 kubelet[2633]: E0903 23:23:04.080316 2633 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:23:04.080429 kubelet[2633]: E0903 23:23:04.080388 2633 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw6hs" Sep 3 23:23:04.080429 kubelet[2633]: E0903 23:23:04.080406 2633 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw6hs" Sep 3 
23:23:04.081040 kubelet[2633]: E0903 23:23:04.080797 2633 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zw6hs_calico-system(6f905e39-f56b-4766-b50d-574df013d5be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zw6hs_calico-system(6f905e39-f56b-4766-b50d-574df013d5be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab080b8b9cc8ddbd68829ea80992587300816dfc6d917ed7130cc0e9cc6d52a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zw6hs" podUID="6f905e39-f56b-4766-b50d-574df013d5be" Sep 3 23:23:05.935696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount104837881.mount: Deactivated successfully. Sep 3 23:23:06.353968 containerd[1524]: time="2025-09-03T23:23:06.353905829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 3 23:23:06.356955 containerd[1524]: time="2025-09-03T23:23:06.356915656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.218374277s" Sep 3 23:23:06.356955 containerd[1524]: time="2025-09-03T23:23:06.356955177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 3 23:23:06.361622 containerd[1524]: time="2025-09-03T23:23:06.361569620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 3 23:23:06.362587 containerd[1524]: time="2025-09-03T23:23:06.362524534Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:06.363033 containerd[1524]: time="2025-09-03T23:23:06.362996431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:06.365182 containerd[1524]: time="2025-09-03T23:23:06.364906619Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 3 23:23:06.376791 containerd[1524]: time="2025-09-03T23:23:06.376750478Z" level=info msg="Container 0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:06.393015 containerd[1524]: time="2025-09-03T23:23:06.392953892Z" level=info msg="CreateContainer within sandbox \"ccb3c66b723e8d87b645e165131f0781fef023877811fe3748bda063eb0e1763\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\"" Sep 3 23:23:06.393530 containerd[1524]: time="2025-09-03T23:23:06.393484831Z" level=info msg="StartContainer for \"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\"" Sep 3 23:23:06.395976 containerd[1524]: time="2025-09-03T23:23:06.395927557Z" level=info msg="connecting to shim 0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697" address="unix:///run/containerd/s/14fdfe3b0973dd6d667801a3a5a1f2ae11a7ae8c6ada1eecdcf44a8d14f65889" protocol=ttrpc version=3 Sep 3 23:23:06.419328 systemd[1]: Started cri-containerd-0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697.scope - libcontainer 
container 0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697. Sep 3 23:23:06.458271 containerd[1524]: time="2025-09-03T23:23:06.458225204Z" level=info msg="StartContainer for \"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\" returns successfully" Sep 3 23:23:06.600870 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 3 23:23:06.601000 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 3 23:23:06.851975 kubelet[2633]: I0903 23:23:06.851933 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-ca-bundle\") pod \"bef2948a-779e-4db6-a501-40ae4816bc6d\" (UID: \"bef2948a-779e-4db6-a501-40ae4816bc6d\") " Sep 3 23:23:06.851975 kubelet[2633]: I0903 23:23:06.851971 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxs59\" (UniqueName: \"kubernetes.io/projected/bef2948a-779e-4db6-a501-40ae4816bc6d-kube-api-access-sxs59\") pod \"bef2948a-779e-4db6-a501-40ae4816bc6d\" (UID: \"bef2948a-779e-4db6-a501-40ae4816bc6d\") " Sep 3 23:23:06.852684 kubelet[2633]: I0903 23:23:06.851999 2633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-backend-key-pair\") pod \"bef2948a-779e-4db6-a501-40ae4816bc6d\" (UID: \"bef2948a-779e-4db6-a501-40ae4816bc6d\") " Sep 3 23:23:06.854995 kubelet[2633]: I0903 23:23:06.854960 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bef2948a-779e-4db6-a501-40ae4816bc6d" (UID: "bef2948a-779e-4db6-a501-40ae4816bc6d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 3 23:23:06.861937 kubelet[2633]: I0903 23:23:06.861873 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bef2948a-779e-4db6-a501-40ae4816bc6d" (UID: "bef2948a-779e-4db6-a501-40ae4816bc6d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 3 23:23:06.862660 kubelet[2633]: I0903 23:23:06.862615 2633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bef2948a-779e-4db6-a501-40ae4816bc6d-kube-api-access-sxs59" (OuterVolumeSpecName: "kube-api-access-sxs59") pod "bef2948a-779e-4db6-a501-40ae4816bc6d" (UID: "bef2948a-779e-4db6-a501-40ae4816bc6d"). InnerVolumeSpecName "kube-api-access-sxs59". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 3 23:23:06.936620 systemd[1]: var-lib-kubelet-pods-bef2948a\x2d779e\x2d4db6\x2da501\x2d40ae4816bc6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsxs59.mount: Deactivated successfully. Sep 3 23:23:06.936722 systemd[1]: var-lib-kubelet-pods-bef2948a\x2d779e\x2d4db6\x2da501\x2d40ae4816bc6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 3 23:23:06.952615 kubelet[2633]: I0903 23:23:06.952567 2633 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 3 23:23:06.952615 kubelet[2633]: I0903 23:23:06.952604 2633 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxs59\" (UniqueName: \"kubernetes.io/projected/bef2948a-779e-4db6-a501-40ae4816bc6d-kube-api-access-sxs59\") on node \"localhost\" DevicePath \"\"" Sep 3 23:23:06.952615 kubelet[2633]: I0903 23:23:06.952615 2633 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bef2948a-779e-4db6-a501-40ae4816bc6d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 3 23:23:07.018596 systemd[1]: Removed slice kubepods-besteffort-podbef2948a_779e_4db6_a501_40ae4816bc6d.slice - libcontainer container kubepods-besteffort-podbef2948a_779e_4db6_a501_40ae4816bc6d.slice. Sep 3 23:23:07.173698 kubelet[2633]: I0903 23:23:07.173536 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-snkxh" podStartSLOduration=1.38958686 podStartE2EDuration="13.173518039s" podCreationTimestamp="2025-09-03 23:22:54 +0000 UTC" firstStartedPulling="2025-09-03 23:22:54.57392931 +0000 UTC m=+17.657687013" lastFinishedPulling="2025-09-03 23:23:06.357860489 +0000 UTC m=+29.441618192" observedRunningTime="2025-09-03 23:23:07.172599567 +0000 UTC m=+30.256357270" watchObservedRunningTime="2025-09-03 23:23:07.173518039 +0000 UTC m=+30.257275742" Sep 3 23:23:07.247605 systemd[1]: Created slice kubepods-besteffort-pod02cda149_ccae_40d1_a902_46ac69800e65.slice - libcontainer container kubepods-besteffort-pod02cda149_ccae_40d1_a902_46ac69800e65.slice. 
Sep 3 23:23:07.355483 kubelet[2633]: I0903 23:23:07.355430 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqx74\" (UniqueName: \"kubernetes.io/projected/02cda149-ccae-40d1-a902-46ac69800e65-kube-api-access-pqx74\") pod \"whisker-6df8f45dbf-vn6cp\" (UID: \"02cda149-ccae-40d1-a902-46ac69800e65\") " pod="calico-system/whisker-6df8f45dbf-vn6cp" Sep 3 23:23:07.355629 kubelet[2633]: I0903 23:23:07.355519 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02cda149-ccae-40d1-a902-46ac69800e65-whisker-backend-key-pair\") pod \"whisker-6df8f45dbf-vn6cp\" (UID: \"02cda149-ccae-40d1-a902-46ac69800e65\") " pod="calico-system/whisker-6df8f45dbf-vn6cp" Sep 3 23:23:07.355629 kubelet[2633]: I0903 23:23:07.355541 2633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02cda149-ccae-40d1-a902-46ac69800e65-whisker-ca-bundle\") pod \"whisker-6df8f45dbf-vn6cp\" (UID: \"02cda149-ccae-40d1-a902-46ac69800e65\") " pod="calico-system/whisker-6df8f45dbf-vn6cp" Sep 3 23:23:07.552546 containerd[1524]: time="2025-09-03T23:23:07.552483093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df8f45dbf-vn6cp,Uid:02cda149-ccae-40d1-a902-46ac69800e65,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:07.739032 systemd-networkd[1437]: cali4552ef1a92a: Link UP Sep 3 23:23:07.739227 systemd-networkd[1437]: cali4552ef1a92a: Gained carrier Sep 3 23:23:07.754909 containerd[1524]: 2025-09-03 23:23:07.575 [INFO][3763] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 3 23:23:07.754909 containerd[1524]: 2025-09-03 23:23:07.610 [INFO][3763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0 
whisker-6df8f45dbf- calico-system 02cda149-ccae-40d1-a902-46ac69800e65 870 0 2025-09-03 23:23:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6df8f45dbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6df8f45dbf-vn6cp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4552ef1a92a [] [] }} ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-" Sep 3 23:23:07.754909 containerd[1524]: 2025-09-03 23:23:07.610 [INFO][3763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.754909 containerd[1524]: 2025-09-03 23:23:07.681 [INFO][3778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" HandleID="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Workload="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.681 [INFO][3778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" HandleID="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Workload="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000285680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6df8f45dbf-vn6cp", "timestamp":"2025-09-03 23:23:07.681124803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.681 [INFO][3778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.681 [INFO][3778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.681 [INFO][3778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.694 [INFO][3778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" host="localhost" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.703 [INFO][3778] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.712 [INFO][3778] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.714 [INFO][3778] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.716 [INFO][3778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:07.755128 containerd[1524]: 2025-09-03 23:23:07.716 [INFO][3778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" host="localhost" Sep 3 23:23:07.755438 containerd[1524]: 2025-09-03 23:23:07.718 [INFO][3778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18 Sep 3 23:23:07.755438 containerd[1524]: 
2025-09-03 23:23:07.723 [INFO][3778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" host="localhost" Sep 3 23:23:07.755438 containerd[1524]: 2025-09-03 23:23:07.728 [INFO][3778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" host="localhost" Sep 3 23:23:07.755438 containerd[1524]: 2025-09-03 23:23:07.728 [INFO][3778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" host="localhost" Sep 3 23:23:07.755438 containerd[1524]: 2025-09-03 23:23:07.728 [INFO][3778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:07.755438 containerd[1524]: 2025-09-03 23:23:07.728 [INFO][3778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" HandleID="k8s-pod-network.615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Workload="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.755547 containerd[1524]: 2025-09-03 23:23:07.731 [INFO][3763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0", GenerateName:"whisker-6df8f45dbf-", Namespace:"calico-system", SelfLink:"", UID:"02cda149-ccae-40d1-a902-46ac69800e65", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
3, 23, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df8f45dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6df8f45dbf-vn6cp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4552ef1a92a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:07.755547 containerd[1524]: 2025-09-03 23:23:07.731 [INFO][3763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.755615 containerd[1524]: 2025-09-03 23:23:07.732 [INFO][3763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4552ef1a92a ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.755615 containerd[1524]: 2025-09-03 23:23:07.739 [INFO][3763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" 
WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.755654 containerd[1524]: 2025-09-03 23:23:07.740 [INFO][3763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0", GenerateName:"whisker-6df8f45dbf-", Namespace:"calico-system", SelfLink:"", UID:"02cda149-ccae-40d1-a902-46ac69800e65", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df8f45dbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18", Pod:"whisker-6df8f45dbf-vn6cp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4552ef1a92a", MAC:"42:51:a1:5d:35:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:07.755703 containerd[1524]: 2025-09-03 23:23:07.752 [INFO][3763] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" Namespace="calico-system" Pod="whisker-6df8f45dbf-vn6cp" WorkloadEndpoint="localhost-k8s-whisker--6df8f45dbf--vn6cp-eth0" Sep 3 23:23:07.832891 containerd[1524]: time="2025-09-03T23:23:07.832286763Z" level=info msg="connecting to shim 615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18" address="unix:///run/containerd/s/9481bdba4a7ff4a8a699b94d0be357aa438f6cc7ab03cc7650d9280ab8eb80d6" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:07.861264 systemd[1]: Started cri-containerd-615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18.scope - libcontainer container 615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18. Sep 3 23:23:07.872264 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:07.923705 containerd[1524]: time="2025-09-03T23:23:07.923655041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df8f45dbf-vn6cp,Uid:02cda149-ccae-40d1-a902-46ac69800e65,Namespace:calico-system,Attempt:0,} returns sandbox id \"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18\"" Sep 3 23:23:07.934487 containerd[1524]: time="2025-09-03T23:23:07.934449689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 3 23:23:08.153722 kubelet[2633]: I0903 23:23:08.153623 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:23:08.786729 containerd[1524]: time="2025-09-03T23:23:08.786664784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:08.788539 containerd[1524]: time="2025-09-03T23:23:08.788511565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 3 23:23:08.789205 containerd[1524]: 
time="2025-09-03T23:23:08.789161787Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:08.791353 containerd[1524]: time="2025-09-03T23:23:08.791310897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:08.792124 containerd[1524]: time="2025-09-03T23:23:08.792085963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 857.593832ms" Sep 3 23:23:08.792303 containerd[1524]: time="2025-09-03T23:23:08.792205007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 3 23:23:08.794179 containerd[1524]: time="2025-09-03T23:23:08.794155551Z" level=info msg="CreateContainer within sandbox \"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 3 23:23:08.803342 containerd[1524]: time="2025-09-03T23:23:08.803308292Z" level=info msg="Container a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:08.824798 containerd[1524]: time="2025-09-03T23:23:08.824583273Z" level=info msg="CreateContainer within sandbox \"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637\"" Sep 3 
23:23:08.825430 containerd[1524]: time="2025-09-03T23:23:08.825395259Z" level=info msg="StartContainer for \"a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637\"" Sep 3 23:23:08.826723 containerd[1524]: time="2025-09-03T23:23:08.826688782Z" level=info msg="connecting to shim a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637" address="unix:///run/containerd/s/9481bdba4a7ff4a8a699b94d0be357aa438f6cc7ab03cc7650d9280ab8eb80d6" protocol=ttrpc version=3 Sep 3 23:23:08.848264 systemd[1]: Started cri-containerd-a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637.scope - libcontainer container a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637. Sep 3 23:23:08.891403 containerd[1524]: time="2025-09-03T23:23:08.891357991Z" level=info msg="StartContainer for \"a5e9f1271cd254e96c09ef43f773a4cff4e5ad0a911a34167fa75ab9d0121637\" returns successfully" Sep 3 23:23:08.893624 containerd[1524]: time="2025-09-03T23:23:08.893536583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 3 23:23:09.008784 kubelet[2633]: I0903 23:23:09.008729 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bef2948a-779e-4db6-a501-40ae4816bc6d" path="/var/lib/kubelet/pods/bef2948a-779e-4db6-a501-40ae4816bc6d/volumes" Sep 3 23:23:09.250955 kubelet[2633]: I0903 23:23:09.250777 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:23:09.251274 kubelet[2633]: E0903 23:23:09.251126 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:09.805321 systemd-networkd[1437]: cali4552ef1a92a: Gained IPv6LL Sep 3 23:23:10.163244 kubelet[2633]: E0903 23:23:10.162927 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 
23:23:10.233327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599548064.mount: Deactivated successfully. Sep 3 23:23:10.283829 containerd[1524]: time="2025-09-03T23:23:10.283667228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:10.285745 containerd[1524]: time="2025-09-03T23:23:10.285712411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 3 23:23:10.286925 containerd[1524]: time="2025-09-03T23:23:10.286885887Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:10.289052 containerd[1524]: time="2025-09-03T23:23:10.289014153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:10.289882 containerd[1524]: time="2025-09-03T23:23:10.289849058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.396221593s" Sep 3 23:23:10.289882 containerd[1524]: time="2025-09-03T23:23:10.289881699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 3 23:23:10.291873 containerd[1524]: time="2025-09-03T23:23:10.291834319Z" level=info msg="CreateContainer within sandbox 
\"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 3 23:23:10.306284 containerd[1524]: time="2025-09-03T23:23:10.306244122Z" level=info msg="Container b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:10.325533 containerd[1524]: time="2025-09-03T23:23:10.325473473Z" level=info msg="CreateContainer within sandbox \"615184e17a91cf65ffd9dbd1e87def12f248af004e0794258764d48b661e4d18\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63\"" Sep 3 23:23:10.326224 containerd[1524]: time="2025-09-03T23:23:10.326194495Z" level=info msg="StartContainer for \"b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63\"" Sep 3 23:23:10.328103 containerd[1524]: time="2025-09-03T23:23:10.328066072Z" level=info msg="connecting to shim b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63" address="unix:///run/containerd/s/9481bdba4a7ff4a8a699b94d0be357aa438f6cc7ab03cc7650d9280ab8eb80d6" protocol=ttrpc version=3 Sep 3 23:23:10.353321 systemd[1]: Started cri-containerd-b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63.scope - libcontainer container b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63. 
Sep 3 23:23:10.401969 containerd[1524]: time="2025-09-03T23:23:10.401865499Z" level=info msg="StartContainer for \"b1e410e39038544c7d1315e3f34da5f7151e64f2279ec6ee03541e70c6186a63\" returns successfully" Sep 3 23:23:10.513829 systemd-networkd[1437]: vxlan.calico: Link UP Sep 3 23:23:10.513837 systemd-networkd[1437]: vxlan.calico: Gained carrier Sep 3 23:23:11.185701 kubelet[2633]: I0903 23:23:11.185636 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6df8f45dbf-vn6cp" podStartSLOduration=1.829102347 podStartE2EDuration="4.185618872s" podCreationTimestamp="2025-09-03 23:23:07 +0000 UTC" firstStartedPulling="2025-09-03 23:23:07.934003434 +0000 UTC m=+31.017761137" lastFinishedPulling="2025-09-03 23:23:10.290519959 +0000 UTC m=+33.374277662" observedRunningTime="2025-09-03 23:23:11.18554663 +0000 UTC m=+34.269304373" watchObservedRunningTime="2025-09-03 23:23:11.185618872 +0000 UTC m=+34.269376535" Sep 3 23:23:12.557292 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Sep 3 23:23:13.006990 kubelet[2633]: E0903 23:23:13.006766 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:13.007349 containerd[1524]: time="2025-09-03T23:23:13.007147505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-th5cd,Uid:8f813de9-1151-4a9a-939a-9297b66b14a4,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:13.117313 systemd-networkd[1437]: calic209619f04f: Link UP Sep 3 23:23:13.117725 systemd-networkd[1437]: calic209619f04f: Gained carrier Sep 3 23:23:13.136023 containerd[1524]: 2025-09-03 23:23:13.051 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0 coredns-7c65d6cfc9- kube-system 8f813de9-1151-4a9a-939a-9297b66b14a4 802 0 2025-09-03 23:22:42 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-th5cd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic209619f04f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-" Sep 3 23:23:13.136023 containerd[1524]: 2025-09-03 23:23:13.051 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136023 containerd[1524]: 2025-09-03 23:23:13.076 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" HandleID="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Workload="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.076 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" HandleID="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Workload="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-th5cd", "timestamp":"2025-09-03 23:23:13.07682881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.077 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.077 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.077 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.088 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" host="localhost" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.092 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.097 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.098 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.101 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:13.136258 containerd[1524]: 2025-09-03 23:23:13.101 [INFO][4198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" host="localhost" Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.103 [INFO][4198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4 Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.107 [INFO][4198] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" host="localhost" Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.113 [INFO][4198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" host="localhost" Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.113 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" host="localhost" Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.113 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:13.136540 containerd[1524]: 2025-09-03 23:23:13.113 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" HandleID="k8s-pod-network.7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Workload="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136680 containerd[1524]: 2025-09-03 23:23:13.115 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8f813de9-1151-4a9a-939a-9297b66b14a4", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 42, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-th5cd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic209619f04f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:13.136749 containerd[1524]: 2025-09-03 23:23:13.115 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136749 containerd[1524]: 2025-09-03 23:23:13.115 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic209619f04f ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136749 containerd[1524]: 2025-09-03 23:23:13.117 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.136900 containerd[1524]: 2025-09-03 23:23:13.118 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8f813de9-1151-4a9a-939a-9297b66b14a4", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4", Pod:"coredns-7c65d6cfc9-th5cd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calic209619f04f", MAC:"12:3d:16:8d:1a:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:13.136900 containerd[1524]: 2025-09-03 23:23:13.128 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-th5cd" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--th5cd-eth0" Sep 3 23:23:13.175502 containerd[1524]: time="2025-09-03T23:23:13.175455003Z" level=info msg="connecting to shim 7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4" address="unix:///run/containerd/s/e09f2eb88130cfedf135bd07d7fed14bb8447f2b1728e11b49eb53df31cc987c" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:13.212699 systemd[1]: Started cri-containerd-7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4.scope - libcontainer container 7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4. 
Sep 3 23:23:13.230256 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:13.272426 containerd[1524]: time="2025-09-03T23:23:13.272313466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-th5cd,Uid:8f813de9-1151-4a9a-939a-9297b66b14a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4\"" Sep 3 23:23:13.273479 kubelet[2633]: E0903 23:23:13.273235 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:13.275623 containerd[1524]: time="2025-09-03T23:23:13.275582317Z" level=info msg="CreateContainer within sandbox \"7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:23:13.288570 containerd[1524]: time="2025-09-03T23:23:13.288369794Z" level=info msg="Container 0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:13.290046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386739780.mount: Deactivated successfully. 
Sep 3 23:23:13.295314 containerd[1524]: time="2025-09-03T23:23:13.295259027Z" level=info msg="CreateContainer within sandbox \"7ff366b3dfce917787554c365043546e3506d61a090e3cd7a862222d95199bf4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7\"" Sep 3 23:23:13.296762 containerd[1524]: time="2025-09-03T23:23:13.296722387Z" level=info msg="StartContainer for \"0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7\"" Sep 3 23:23:13.297837 containerd[1524]: time="2025-09-03T23:23:13.297788937Z" level=info msg="connecting to shim 0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7" address="unix:///run/containerd/s/e09f2eb88130cfedf135bd07d7fed14bb8447f2b1728e11b49eb53df31cc987c" protocol=ttrpc version=3 Sep 3 23:23:13.320332 systemd[1]: Started cri-containerd-0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7.scope - libcontainer container 0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7. Sep 3 23:23:13.347104 containerd[1524]: time="2025-09-03T23:23:13.347038672Z" level=info msg="StartContainer for \"0ba8a991130a143abda743808959dc6727a8918b557d16004d96a984093235f7\" returns successfully" Sep 3 23:23:14.012208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658835487.mount: Deactivated successfully. 
Sep 3 23:23:14.185330 kubelet[2633]: E0903 23:23:14.185295 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:14.197132 kubelet[2633]: I0903 23:23:14.196955 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-th5cd" podStartSLOduration=32.196935631 podStartE2EDuration="32.196935631s" podCreationTimestamp="2025-09-03 23:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:14.195495832 +0000 UTC m=+37.279253535" watchObservedRunningTime="2025-09-03 23:23:14.196935631 +0000 UTC m=+37.280693334" Sep 3 23:23:14.413284 systemd-networkd[1437]: calic209619f04f: Gained IPv6LL Sep 3 23:23:15.006823 kubelet[2633]: E0903 23:23:15.006759 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:15.007271 containerd[1524]: time="2025-09-03T23:23:15.007232856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-rfdl9,Uid:f3d1e3e7-af2e-4654-8632-4fbd431a9553,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:23:15.007736 containerd[1524]: time="2025-09-03T23:23:15.007245256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fxvbp,Uid:73073bd3-8a46-474b-99d6-49246ae163ea,Namespace:kube-system,Attempt:0,}" Sep 3 23:23:15.119667 systemd-networkd[1437]: cali0060d04d4bd: Link UP Sep 3 23:23:15.120379 systemd-networkd[1437]: cali0060d04d4bd: Gained carrier Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.053 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0 
calico-apiserver-9654f7d87- calico-apiserver f3d1e3e7-af2e-4654-8632-4fbd431a9553 803 0 2025-09-03 23:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9654f7d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9654f7d87-rfdl9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0060d04d4bd [] [] }} ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.053 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.079 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" HandleID="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Workload="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.079 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" HandleID="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Workload="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000596b50), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-9654f7d87-rfdl9", "timestamp":"2025-09-03 23:23:15.07960076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.079 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.079 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.079 [INFO][4331] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.091 [INFO][4331] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.095 [INFO][4331] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.098 [INFO][4331] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.100 [INFO][4331] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.102 [INFO][4331] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.102 [INFO][4331] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.104 [INFO][4331] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.107 [INFO][4331] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.112 [INFO][4331] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.112 [INFO][4331] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" host="localhost" Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.113 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 3 23:23:15.135144 containerd[1524]: 2025-09-03 23:23:15.113 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" HandleID="k8s-pod-network.d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Workload="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.116 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0", GenerateName:"calico-apiserver-9654f7d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3d1e3e7-af2e-4654-8632-4fbd431a9553", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9654f7d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9654f7d87-rfdl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0060d04d4bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.116 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.116 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0060d04d4bd ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.121 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.123 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0", GenerateName:"calico-apiserver-9654f7d87-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"f3d1e3e7-af2e-4654-8632-4fbd431a9553", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9654f7d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d", Pod:"calico-apiserver-9654f7d87-rfdl9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0060d04d4bd", MAC:"aa:16:6d:fe:9d:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:15.136307 containerd[1524]: 2025-09-03 23:23:15.132 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-rfdl9" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--rfdl9-eth0" Sep 3 23:23:15.185776 kubelet[2633]: E0903 23:23:15.185751 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:15.207777 containerd[1524]: time="2025-09-03T23:23:15.207725292Z" 
level=info msg="connecting to shim d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d" address="unix:///run/containerd/s/9638fe8ae6c25c078cc7e9a87d496defc499b31ead3d37b92e45614809e19512" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:15.230164 systemd-networkd[1437]: cali26349cbe964: Link UP Sep 3 23:23:15.230440 systemd-networkd[1437]: cali26349cbe964: Gained carrier Sep 3 23:23:15.247280 systemd[1]: Started cri-containerd-d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d.scope - libcontainer container d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d. Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.059 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0 coredns-7c65d6cfc9- kube-system 73073bd3-8a46-474b-99d6-49246ae163ea 799 0 2025-09-03 23:22:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-fxvbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26349cbe964 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.059 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.086 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" HandleID="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Workload="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.086 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" HandleID="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Workload="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137570), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-fxvbp", "timestamp":"2025-09-03 23:23:15.086030769 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.086 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.113 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.113 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.192 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.198 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.204 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.206 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.210 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.210 [INFO][4337] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.211 [INFO][4337] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2 Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.216 [INFO][4337] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.224 [INFO][4337] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.224 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" host="localhost" Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.224 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:15.248047 containerd[1524]: 2025-09-03 23:23:15.224 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" HandleID="k8s-pod-network.0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Workload="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.227 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"73073bd3-8a46-474b-99d6-49246ae163ea", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-fxvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26349cbe964", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.228 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.228 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26349cbe964 ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.230 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.231 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"73073bd3-8a46-474b-99d6-49246ae163ea", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2", Pod:"coredns-7c65d6cfc9-fxvbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26349cbe964", MAC:"aa:8c:13:0d:6d:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:15.248528 containerd[1524]: 2025-09-03 23:23:15.243 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fxvbp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fxvbp-eth0" Sep 3 23:23:15.263969 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:15.283628 containerd[1524]: time="2025-09-03T23:23:15.283460885Z" level=info msg="connecting to shim 0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2" address="unix:///run/containerd/s/534e12d23d0c0dbedf56e97cad8311d407e579d454e11e969a3aabcc4820c4ee" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:15.297797 containerd[1524]: time="2025-09-03T23:23:15.297756701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-rfdl9,Uid:f3d1e3e7-af2e-4654-8632-4fbd431a9553,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d\"" Sep 3 23:23:15.304005 containerd[1524]: time="2025-09-03T23:23:15.303968905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 3 23:23:15.313334 systemd[1]: Started cri-containerd-0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2.scope - libcontainer container 0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2. 
Sep 3 23:23:15.325290 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:15.346911 containerd[1524]: time="2025-09-03T23:23:15.346857834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fxvbp,Uid:73073bd3-8a46-474b-99d6-49246ae163ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2\"" Sep 3 23:23:15.347709 kubelet[2633]: E0903 23:23:15.347683 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:15.350430 containerd[1524]: time="2025-09-03T23:23:15.350377486Z" level=info msg="CreateContainer within sandbox \"0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:23:15.366143 containerd[1524]: time="2025-09-03T23:23:15.366091380Z" level=info msg="Container 0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:15.370892 containerd[1524]: time="2025-09-03T23:23:15.370860745Z" level=info msg="CreateContainer within sandbox \"0add5a3aeafe1de75c96e5b3a49f982061171e224a1552f879aeed173e198cf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9\"" Sep 3 23:23:15.372442 containerd[1524]: time="2025-09-03T23:23:15.371485282Z" level=info msg="StartContainer for \"0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9\"" Sep 3 23:23:15.372646 containerd[1524]: time="2025-09-03T23:23:15.372613591Z" level=info msg="connecting to shim 0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9" address="unix:///run/containerd/s/534e12d23d0c0dbedf56e97cad8311d407e579d454e11e969a3aabcc4820c4ee" protocol=ttrpc version=3 Sep 3 
23:23:15.392272 systemd[1]: Started cri-containerd-0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9.scope - libcontainer container 0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9. Sep 3 23:23:15.416272 containerd[1524]: time="2025-09-03T23:23:15.415772727Z" level=info msg="StartContainer for \"0a996f75eb00522be61eed00fd129a8da92cea073ae86001550cd9f3f3a255a9\" returns successfully" Sep 3 23:23:16.006823 containerd[1524]: time="2025-09-03T23:23:16.006774796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbddb5b44-vx69p,Uid:4588fcdf-fab0-4406-a001-bb8a1ef4c7c6,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:16.131354 systemd-networkd[1437]: cali619aa03aa98: Link UP Sep 3 23:23:16.131547 systemd-networkd[1437]: cali619aa03aa98: Gained carrier Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.046 [INFO][4503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0 calico-kube-controllers-7dbddb5b44- calico-system 4588fcdf-fab0-4406-a001-bb8a1ef4c7c6 804 0 2025-09-03 23:22:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dbddb5b44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dbddb5b44-vx69p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali619aa03aa98 [] [] }} ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.046 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.088 [INFO][4512] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" HandleID="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Workload="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.088 [INFO][4512] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" HandleID="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Workload="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000511430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dbddb5b44-vx69p", "timestamp":"2025-09-03 23:23:16.088277402 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.088 [INFO][4512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.088 [INFO][4512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.088 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.097 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.102 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.106 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.107 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.110 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.110 [INFO][4512] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.112 [INFO][4512] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.117 [INFO][4512] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.124 [INFO][4512] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.124 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" host="localhost" Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.124 [INFO][4512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:16.149218 containerd[1524]: 2025-09-03 23:23:16.124 [INFO][4512] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" HandleID="k8s-pod-network.8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Workload="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 23:23:16.126 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0", GenerateName:"calico-kube-controllers-7dbddb5b44-", Namespace:"calico-system", SelfLink:"", UID:"4588fcdf-fab0-4406-a001-bb8a1ef4c7c6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dbddb5b44", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dbddb5b44-vx69p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali619aa03aa98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 23:23:16.126 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 23:23:16.126 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali619aa03aa98 ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 23:23:16.131 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 
23:23:16.133 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0", GenerateName:"calico-kube-controllers-7dbddb5b44-", Namespace:"calico-system", SelfLink:"", UID:"4588fcdf-fab0-4406-a001-bb8a1ef4c7c6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dbddb5b44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a", Pod:"calico-kube-controllers-7dbddb5b44-vx69p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali619aa03aa98", MAC:"e2:a9:04:21:11:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:16.151867 containerd[1524]: 2025-09-03 
23:23:16.144 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" Namespace="calico-system" Pod="calico-kube-controllers-7dbddb5b44-vx69p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dbddb5b44--vx69p-eth0" Sep 3 23:23:16.197140 kubelet[2633]: E0903 23:23:16.195027 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:16.198711 kubelet[2633]: E0903 23:23:16.198617 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:16.250703 kubelet[2633]: I0903 23:23:16.250527 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fxvbp" podStartSLOduration=33.250508355 podStartE2EDuration="33.250508355s" podCreationTimestamp="2025-09-03 23:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:16.229540178 +0000 UTC m=+39.313297921" watchObservedRunningTime="2025-09-03 23:23:16.250508355 +0000 UTC m=+39.334266058" Sep 3 23:23:16.268917 containerd[1524]: time="2025-09-03T23:23:16.268802543Z" level=info msg="connecting to shim 8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a" address="unix:///run/containerd/s/828ac77107c1e998baf0b8e18e784b593867e1f2a515ebe65fe910c4357c6c66" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:16.306309 systemd[1]: Started cri-containerd-8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a.scope - libcontainer container 8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a. 
Sep 3 23:23:16.318727 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:16.333300 systemd-networkd[1437]: cali0060d04d4bd: Gained IPv6LL Sep 3 23:23:16.333787 systemd-networkd[1437]: cali26349cbe964: Gained IPv6LL Sep 3 23:23:16.339909 containerd[1524]: time="2025-09-03T23:23:16.339860241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dbddb5b44-vx69p,Uid:4588fcdf-fab0-4406-a001-bb8a1ef4c7c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a\"" Sep 3 23:23:16.742826 containerd[1524]: time="2025-09-03T23:23:16.742717912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:16.743792 containerd[1524]: time="2025-09-03T23:23:16.743764579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 3 23:23:16.744446 containerd[1524]: time="2025-09-03T23:23:16.744387355Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:16.746714 containerd[1524]: time="2025-09-03T23:23:16.746648493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:16.747243 containerd[1524]: time="2025-09-03T23:23:16.747194867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.443188721s" Sep 3 23:23:16.747243 containerd[1524]: time="2025-09-03T23:23:16.747229628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 3 23:23:16.749190 containerd[1524]: time="2025-09-03T23:23:16.748989953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 3 23:23:16.749973 containerd[1524]: time="2025-09-03T23:23:16.749926857Z" level=info msg="CreateContainer within sandbox \"d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 3 23:23:16.759961 containerd[1524]: time="2025-09-03T23:23:16.759902872Z" level=info msg="Container c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:16.761952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691129473.mount: Deactivated successfully. 
Sep 3 23:23:16.767391 containerd[1524]: time="2025-09-03T23:23:16.767345983Z" level=info msg="CreateContainer within sandbox \"d6131bcd17d995b126db4088605a3de625c02e8ff5f32d2391575c679efced6d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7\"" Sep 3 23:23:16.768074 containerd[1524]: time="2025-09-03T23:23:16.768044201Z" level=info msg="StartContainer for \"c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7\"" Sep 3 23:23:16.771022 containerd[1524]: time="2025-09-03T23:23:16.770991676Z" level=info msg="connecting to shim c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7" address="unix:///run/containerd/s/9638fe8ae6c25c078cc7e9a87d496defc499b31ead3d37b92e45614809e19512" protocol=ttrpc version=3 Sep 3 23:23:16.789262 systemd[1]: Started cri-containerd-c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7.scope - libcontainer container c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7. 
Sep 3 23:23:16.841425 containerd[1524]: time="2025-09-03T23:23:16.841360197Z" level=info msg="StartContainer for \"c921ff94b6e191567cff165d55714c2c4dfa68b38002682fac1b7683193918a7\" returns successfully" Sep 3 23:23:17.008395 containerd[1524]: time="2025-09-03T23:23:17.008362266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw6hs,Uid:6f905e39-f56b-4766-b50d-574df013d5be,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:17.009731 containerd[1524]: time="2025-09-03T23:23:17.009677459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ns4wm,Uid:f5fb2499-db73-4ef5-ab7e-7297d2bb1a00,Namespace:calico-system,Attempt:0,}" Sep 3 23:23:17.208024 systemd-networkd[1437]: cali188e3745868: Link UP Sep 3 23:23:17.208310 systemd-networkd[1437]: cali188e3745868: Gained carrier Sep 3 23:23:17.217059 kubelet[2633]: E0903 23:23:17.216032 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.072 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--ns4wm-eth0 goldmane-7988f88666- calico-system f5fb2499-db73-4ef5-ab7e-7297d2bb1a00 806 0 2025-09-03 23:22:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-ns4wm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali188e3745868 [] [] }} ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-" Sep 3 23:23:17.232310 containerd[1524]: 
2025-09-03 23:23:17.073 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.125 [INFO][4657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" HandleID="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Workload="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.125 [INFO][4657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" HandleID="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Workload="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-ns4wm", "timestamp":"2025-09-03 23:23:17.125411863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.125 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.125 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.125 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.147 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.153 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.160 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.163 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.165 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.165 [INFO][4657] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.167 [INFO][4657] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512 Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.174 [INFO][4657] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.197 [INFO][4657] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.198 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" host="localhost" Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.198 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:17.232310 containerd[1524]: 2025-09-03 23:23:17.198 [INFO][4657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" HandleID="k8s-pod-network.ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Workload="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.201 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ns4wm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-ns4wm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali188e3745868", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.202 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.202 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali188e3745868 ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.208 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.211 [INFO][4632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--ns4wm-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"f5fb2499-db73-4ef5-ab7e-7297d2bb1a00", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512", Pod:"goldmane-7988f88666-ns4wm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali188e3745868", MAC:"b6:5d:98:6d:8d:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:17.233007 containerd[1524]: 2025-09-03 23:23:17.228 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" Namespace="calico-system" Pod="goldmane-7988f88666-ns4wm" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--ns4wm-eth0" Sep 3 23:23:17.242310 kubelet[2633]: I0903 23:23:17.242036 2633 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-apiserver/calico-apiserver-9654f7d87-rfdl9" podStartSLOduration=24.793771955 podStartE2EDuration="26.242021328s" podCreationTimestamp="2025-09-03 23:22:51 +0000 UTC" firstStartedPulling="2025-09-03 23:23:15.300218326 +0000 UTC m=+38.383976029" lastFinishedPulling="2025-09-03 23:23:16.748467699 +0000 UTC m=+39.832225402" observedRunningTime="2025-09-03 23:23:17.240675535 +0000 UTC m=+40.324433238" watchObservedRunningTime="2025-09-03 23:23:17.242021328 +0000 UTC m=+40.325779031" Sep 3 23:23:17.267379 containerd[1524]: time="2025-09-03T23:23:17.267279958Z" level=info msg="connecting to shim ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512" address="unix:///run/containerd/s/7e9431e283b4c1732b7521d896dfdff18f5cd59ac7e87ac2289c82f1888dd2e7" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:17.301493 systemd-networkd[1437]: cali7738bef2281: Link UP Sep 3 23:23:17.301666 systemd-networkd[1437]: cali7738bef2281: Gained carrier Sep 3 23:23:17.305602 systemd[1]: Started cri-containerd-ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512.scope - libcontainer container ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512. 
Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.069 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zw6hs-eth0 csi-node-driver- calico-system 6f905e39-f56b-4766-b50d-574df013d5be 699 0 2025-09-03 23:22:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zw6hs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7738bef2281 [] [] }} ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.070 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.129 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" HandleID="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Workload="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.130 [INFO][4655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" HandleID="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" 
Workload="localhost-k8s-csi--node--driver--zw6hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd680), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zw6hs", "timestamp":"2025-09-03 23:23:17.129199277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.130 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.198 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.198 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.244 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.254 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.266 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.269 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.272 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.272 [INFO][4655] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.274 [INFO][4655] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135 Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.279 [INFO][4655] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.290 [INFO][4655] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.292 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" host="localhost" Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.292 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 3 23:23:17.320180 containerd[1524]: 2025-09-03 23:23:17.292 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" HandleID="k8s-pod-network.988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Workload="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.298 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zw6hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f905e39-f56b-4766-b50d-574df013d5be", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zw6hs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7738bef2281", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.298 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.298 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7738bef2281 ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.300 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.301 [INFO][4626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zw6hs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6f905e39-f56b-4766-b50d-574df013d5be", ResourceVersion:"699", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135", Pod:"csi-node-driver-zw6hs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7738bef2281", MAC:"6e:48:b2:c6:1f:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:17.321009 containerd[1524]: 2025-09-03 23:23:17.316 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" Namespace="calico-system" Pod="csi-node-driver-zw6hs" WorkloadEndpoint="localhost-k8s-csi--node--driver--zw6hs-eth0" Sep 3 23:23:17.337698 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:17.363789 containerd[1524]: time="2025-09-03T23:23:17.363743601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-ns4wm,Uid:f5fb2499-db73-4ef5-ab7e-7297d2bb1a00,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512\"" Sep 3 23:23:17.365131 containerd[1524]: time="2025-09-03T23:23:17.365039394Z" level=info msg="connecting to shim 988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135" address="unix:///run/containerd/s/b50ececf0ef1e3fca41bad31c5cc09e65db2bd819f3389d3cca6a6054d402755" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:17.392794 systemd[1]: Started cri-containerd-988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135.scope - libcontainer container 988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135. Sep 3 23:23:17.407308 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:17.410798 kubelet[2633]: I0903 23:23:17.410768 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:23:17.421382 containerd[1524]: time="2025-09-03T23:23:17.421323516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw6hs,Uid:6f905e39-f56b-4766-b50d-574df013d5be,Namespace:calico-system,Attempt:0,} returns sandbox id \"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135\"" Sep 3 23:23:17.491374 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:35486.service - OpenSSH per-connection server daemon (10.0.0.1:35486). 
Sep 3 23:23:17.558091 sshd[4816]: Accepted publickey for core from 10.0.0.1 port 35486 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:23:17.558397 containerd[1524]: time="2025-09-03T23:23:17.558253368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\" id:\"23ab1b2178c8f78701fd5e948110e0189e6a5fb47bfa4e02a91751726e56189c\" pid:4809 exited_at:{seconds:1756941797 nanos:557963241}" Sep 3 23:23:17.559810 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:23:17.564147 systemd-logind[1496]: New session 8 of user core. Sep 3 23:23:17.568261 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 3 23:23:17.613280 systemd-networkd[1437]: cali619aa03aa98: Gained IPv6LL Sep 3 23:23:17.721065 containerd[1524]: time="2025-09-03T23:23:17.721007983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\" id:\"b617812663bf60dd38dc5273b7d724bff971378743abcc1201077ceda872c798\" pid:4839 exited_at:{seconds:1756941797 nanos:720723216}" Sep 3 23:23:17.944799 sshd[4825]: Connection closed by 10.0.0.1 port 35486 Sep 3 23:23:17.944758 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Sep 3 23:23:17.949999 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:35486.service: Deactivated successfully. Sep 3 23:23:17.953579 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:23:17.955130 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Sep 3 23:23:17.957099 systemd-logind[1496]: Removed session 8. 
Sep 3 23:23:18.043015 containerd[1524]: time="2025-09-03T23:23:18.042975699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-p4m7t,Uid:0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:23:18.226724 kubelet[2633]: I0903 23:23:18.226615 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:23:18.227695 kubelet[2633]: E0903 23:23:18.227190 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:23:18.246510 systemd-networkd[1437]: cali54fa56f5049: Link UP Sep 3 23:23:18.247406 systemd-networkd[1437]: cali54fa56f5049: Gained carrier Sep 3 23:23:18.253242 systemd-networkd[1437]: cali188e3745868: Gained IPv6LL Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.152 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0 calico-apiserver-9654f7d87- calico-apiserver 0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c 807 0 2025-09-03 23:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9654f7d87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9654f7d87-p4m7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54fa56f5049 [] [] }} ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.152 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.189 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" HandleID="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Workload="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.189 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" HandleID="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Workload="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9654f7d87-p4m7t", "timestamp":"2025-09-03 23:23:18.189677221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.189 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.190 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.190 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.201 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.211 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.216 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.219 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.223 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.223 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.227 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.234 [INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.241 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.241 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" host="localhost" Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.241 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:23:18.264936 containerd[1524]: 2025-09-03 23:23:18.242 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" HandleID="k8s-pod-network.1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Workload="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.244 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0", GenerateName:"calico-apiserver-9654f7d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9654f7d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9654f7d87-p4m7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54fa56f5049", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.244 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.244 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54fa56f5049 ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.247 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.248 [INFO][4870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0", GenerateName:"calico-apiserver-9654f7d87-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9654f7d87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a", Pod:"calico-apiserver-9654f7d87-p4m7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54fa56f5049", MAC:"36:a7:7d:ed:26:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:23:18.265788 containerd[1524]: 2025-09-03 23:23:18.261 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" Namespace="calico-apiserver" Pod="calico-apiserver-9654f7d87-p4m7t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9654f7d87--p4m7t-eth0" Sep 3 23:23:18.297289 containerd[1524]: time="2025-09-03T23:23:18.297246433Z" level=info msg="connecting to shim 1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a" address="unix:///run/containerd/s/652dabc2cec6c982ccd4241d1a7d51d0647027ffd770083bf0215d505faea80b" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:23:18.339301 systemd[1]: Started cri-containerd-1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a.scope - libcontainer container 1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a. Sep 3 23:23:18.359807 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:23:18.424554 containerd[1524]: time="2025-09-03T23:23:18.424429442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9654f7d87-p4m7t,Uid:0f99d5d6-a36d-49e5-aba8-5e27e0efcd9c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a\"" Sep 3 23:23:18.440016 containerd[1524]: time="2025-09-03T23:23:18.439928778Z" level=info msg="CreateContainer within sandbox \"1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 3 23:23:18.456148 containerd[1524]: time="2025-09-03T23:23:18.456070810Z" level=info msg="Container eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:23:18.469946 containerd[1524]: time="2025-09-03T23:23:18.469771543Z" level=info msg="CreateContainer within sandbox \"1b7645938eb2da2dc9aebe680c49eec397be93d1472c1be6efe2f1a926ae775a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87\"" Sep 3 23:23:18.470909 containerd[1524]: time="2025-09-03T23:23:18.470884370Z" level=info msg="StartContainer for \"eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87\"" Sep 3 23:23:18.473416 containerd[1524]: time="2025-09-03T23:23:18.473391830Z" level=info msg="connecting to shim eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87" address="unix:///run/containerd/s/652dabc2cec6c982ccd4241d1a7d51d0647027ffd770083bf0215d505faea80b" protocol=ttrpc version=3 Sep 3 23:23:18.496280 systemd[1]: Started cri-containerd-eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87.scope - libcontainer container eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87. Sep 3 23:23:18.563781 containerd[1524]: time="2025-09-03T23:23:18.563730304Z" level=info msg="StartContainer for \"eb7cb13483144bc6ab4e221ce6f25a37b92961cb483c58c8ce9037b001580b87\" returns successfully" Sep 3 23:23:18.573283 systemd-networkd[1437]: cali7738bef2281: Gained IPv6LL Sep 3 23:23:18.648227 containerd[1524]: time="2025-09-03T23:23:18.648181955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:18.649216 containerd[1524]: time="2025-09-03T23:23:18.649102057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 3 23:23:18.650052 containerd[1524]: time="2025-09-03T23:23:18.650019439Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:23:18.655664 containerd[1524]: time="2025-09-03T23:23:18.655622255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:18.656431 containerd[1524]: time="2025-09-03T23:23:18.656402474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.907380441s"
Sep 3 23:23:18.656473 containerd[1524]: time="2025-09-03T23:23:18.656437115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 3 23:23:18.658029 containerd[1524]: time="2025-09-03T23:23:18.657850989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 3 23:23:18.664328 containerd[1524]: time="2025-09-03T23:23:18.664298906Z" level=info msg="CreateContainer within sandbox \"8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 3 23:23:18.669777 containerd[1524]: time="2025-09-03T23:23:18.669748278Z" level=info msg="Container 8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:18.675986 containerd[1524]: time="2025-09-03T23:23:18.675950509Z" level=info msg="CreateContainer within sandbox \"8a5ce7ec4e259307f68fcc26a57ecbbb73a3bffe22c91650d27423ea38aa5d4a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\""
Sep 3 23:23:18.676828 containerd[1524]: time="2025-09-03T23:23:18.676791009Z" level=info msg="StartContainer for \"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\""
Sep 3 23:23:18.677975 containerd[1524]: time="2025-09-03T23:23:18.677925157Z" level=info msg="connecting to shim 8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5" address="unix:///run/containerd/s/828ac77107c1e998baf0b8e18e784b593867e1f2a515ebe65fe910c4357c6c66" protocol=ttrpc version=3
Sep 3 23:23:18.700275 systemd[1]: Started cri-containerd-8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5.scope - libcontainer container 8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5.
Sep 3 23:23:18.737029 containerd[1524]: time="2025-09-03T23:23:18.736994391Z" level=info msg="StartContainer for \"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\" returns successfully"
Sep 3 23:23:19.243846 kubelet[2633]: I0903 23:23:19.243785 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dbddb5b44-vx69p" podStartSLOduration=22.927831097 podStartE2EDuration="25.243757592s" podCreationTimestamp="2025-09-03 23:22:54 +0000 UTC" firstStartedPulling="2025-09-03 23:23:16.34174121 +0000 UTC m=+39.425498913" lastFinishedPulling="2025-09-03 23:23:18.657667705 +0000 UTC m=+41.741425408" observedRunningTime="2025-09-03 23:23:19.243354622 +0000 UTC m=+42.327112325" watchObservedRunningTime="2025-09-03 23:23:19.243757592 +0000 UTC m=+42.327515295"
Sep 3 23:23:19.258926 kubelet[2633]: I0903 23:23:19.258475 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9654f7d87-p4m7t" podStartSLOduration=28.25845694 podStartE2EDuration="28.25845694s" podCreationTimestamp="2025-09-03 23:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:23:19.25845678 +0000 UTC m=+42.342214483" watchObservedRunningTime="2025-09-03 23:23:19.25845694 +0000 UTC m=+42.342214643"
Sep 3 23:23:19.281478 containerd[1524]: time="2025-09-03T23:23:19.281430284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\" id:\"db44c4445f92a0c6f930e5a9d77bfea20e82221eee5ebd00c8f2c2d21c11d410\" pid:5043 exited_at:{seconds:1756941799 nanos:281145837}"
Sep 3 23:23:19.725236 systemd-networkd[1437]: cali54fa56f5049: Gained IPv6LL
Sep 3 23:23:20.235366 kubelet[2633]: I0903 23:23:20.235335 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:23:20.706061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435063302.mount: Deactivated successfully.
Sep 3 23:23:21.184978 containerd[1524]: time="2025-09-03T23:23:21.184930294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:21.185889 containerd[1524]: time="2025-09-03T23:23:21.185811514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 3 23:23:21.190955 containerd[1524]: time="2025-09-03T23:23:21.190887229Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:21.191894 containerd[1524]: time="2025-09-03T23:23:21.191783929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.533898219s"
Sep 3 23:23:21.191894 containerd[1524]: time="2025-09-03T23:23:21.191815770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 3 23:23:21.192314 containerd[1524]: time="2025-09-03T23:23:21.192288461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:21.195249 containerd[1524]: time="2025-09-03T23:23:21.194597113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 3 23:23:21.196081 containerd[1524]: time="2025-09-03T23:23:21.196053346Z" level=info msg="CreateContainer within sandbox \"ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 3 23:23:21.205331 containerd[1524]: time="2025-09-03T23:23:21.205278474Z" level=info msg="Container 2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:21.229496 containerd[1524]: time="2025-09-03T23:23:21.229445981Z" level=info msg="CreateContainer within sandbox \"ed35925f87167504433b897b71db6ceb7f2fe076abe2bbc6136cd72d4c6e5512\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\""
Sep 3 23:23:21.230038 containerd[1524]: time="2025-09-03T23:23:21.230012033Z" level=info msg="StartContainer for \"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\""
Sep 3 23:23:21.231481 containerd[1524]: time="2025-09-03T23:23:21.231420505Z" level=info msg="connecting to shim 2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd" address="unix:///run/containerd/s/7e9431e283b4c1732b7521d896dfdff18f5cd59ac7e87ac2289c82f1888dd2e7" protocol=ttrpc version=3
Sep 3 23:23:21.258323 systemd[1]: Started cri-containerd-2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd.scope - libcontainer container 2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd.
Sep 3 23:23:21.302485 containerd[1524]: time="2025-09-03T23:23:21.302431550Z" level=info msg="StartContainer for \"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\" returns successfully"
Sep 3 23:23:22.098607 containerd[1524]: time="2025-09-03T23:23:22.098320654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.098851 containerd[1524]: time="2025-09-03T23:23:22.098796984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 3 23:23:22.099879 containerd[1524]: time="2025-09-03T23:23:22.099822527Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.102799 containerd[1524]: time="2025-09-03T23:23:22.102741672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:22.103166 containerd[1524]: time="2025-09-03T23:23:22.103139440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 907.438182ms"
Sep 3 23:23:22.103218 containerd[1524]: time="2025-09-03T23:23:22.103174321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 3 23:23:22.106971 containerd[1524]: time="2025-09-03T23:23:22.106926484Z" level=info msg="CreateContainer within sandbox \"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 3 23:23:22.119622 containerd[1524]: time="2025-09-03T23:23:22.118499060Z" level=info msg="Container 1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:22.124471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579798853.mount: Deactivated successfully.
Sep 3 23:23:22.134701 containerd[1524]: time="2025-09-03T23:23:22.134595216Z" level=info msg="CreateContainer within sandbox \"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f\""
Sep 3 23:23:22.135472 containerd[1524]: time="2025-09-03T23:23:22.135381193Z" level=info msg="StartContainer for \"1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f\""
Sep 3 23:23:22.137524 containerd[1524]: time="2025-09-03T23:23:22.137476800Z" level=info msg="connecting to shim 1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f" address="unix:///run/containerd/s/b50ececf0ef1e3fca41bad31c5cc09e65db2bd819f3389d3cca6a6054d402755" protocol=ttrpc version=3
Sep 3 23:23:22.161327 systemd[1]: Started cri-containerd-1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f.scope - libcontainer container 1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f.
Sep 3 23:23:22.201720 containerd[1524]: time="2025-09-03T23:23:22.201668299Z" level=info msg="StartContainer for \"1096da7fbd6d9149a70c3819219c6af91d74fd24034c0f09be5da12babe3748f\" returns successfully"
Sep 3 23:23:22.202992 containerd[1524]: time="2025-09-03T23:23:22.202938767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 3 23:23:22.288419 kubelet[2633]: I0903 23:23:22.288329 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-ns4wm" podStartSLOduration=25.46062234 podStartE2EDuration="29.288309215s" podCreationTimestamp="2025-09-03 23:22:53 +0000 UTC" firstStartedPulling="2025-09-03 23:23:17.365881735 +0000 UTC m=+40.449639438" lastFinishedPulling="2025-09-03 23:23:21.19356857 +0000 UTC m=+44.277326313" observedRunningTime="2025-09-03 23:23:22.287817564 +0000 UTC m=+45.371575307" watchObservedRunningTime="2025-09-03 23:23:22.288309215 +0000 UTC m=+45.372066878"
Sep 3 23:23:22.384276 containerd[1524]: time="2025-09-03T23:23:22.384169015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\" id:\"7960bd9c33d5c79df611e3c16a5a33dbf63e5d09170229d358fbf6e1e0f12659\" pid:5156 exit_status:1 exited_at:{seconds:1756941802 nanos:383763806}"
Sep 3 23:23:22.961205 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:49062.service - OpenSSH per-connection server daemon (10.0.0.1:49062).
Sep 3 23:23:23.041618 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 49062 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:23.042978 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:23.048036 systemd-logind[1496]: New session 9 of user core.
Sep 3 23:23:23.055282 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 3 23:23:23.169139 containerd[1524]: time="2025-09-03T23:23:23.169063495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:23.169783 containerd[1524]: time="2025-09-03T23:23:23.169601987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 3 23:23:23.172363 containerd[1524]: time="2025-09-03T23:23:23.171094499Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:23.175229 containerd[1524]: time="2025-09-03T23:23:23.175184388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:23:23.177805 containerd[1524]: time="2025-09-03T23:23:23.177771404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 974.778236ms"
Sep 3 23:23:23.178049 containerd[1524]: time="2025-09-03T23:23:23.177905967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 3 23:23:23.180630 containerd[1524]: time="2025-09-03T23:23:23.180589865Z" level=info msg="CreateContainer within sandbox \"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 3 23:23:23.192055 containerd[1524]: time="2025-09-03T23:23:23.191920510Z" level=info msg="Container 531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:23:23.202129 containerd[1524]: time="2025-09-03T23:23:23.201987448Z" level=info msg="CreateContainer within sandbox \"988afc4437797fffbd35128fae6cdbceb8573d5a87e7bd4c6e55d94df55d1135\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52\""
Sep 3 23:23:23.203932 containerd[1524]: time="2025-09-03T23:23:23.203375518Z" level=info msg="StartContainer for \"531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52\""
Sep 3 23:23:23.205456 containerd[1524]: time="2025-09-03T23:23:23.205423043Z" level=info msg="connecting to shim 531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52" address="unix:///run/containerd/s/b50ececf0ef1e3fca41bad31c5cc09e65db2bd819f3389d3cca6a6054d402755" protocol=ttrpc version=3
Sep 3 23:23:23.229356 systemd[1]: Started cri-containerd-531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52.scope - libcontainer container 531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52.
Sep 3 23:23:23.304936 containerd[1524]: time="2025-09-03T23:23:23.304767314Z" level=info msg="StartContainer for \"531a399ea23b7ee47df68c07b162ce74787ab7afb28bc659b6365c283a50dd52\" returns successfully"
Sep 3 23:23:23.325476 sshd[5175]: Connection closed by 10.0.0.1 port 49062
Sep 3 23:23:23.326404 sshd-session[5169]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:23.329983 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:49062.service: Deactivated successfully.
Sep 3 23:23:23.332638 systemd[1]: session-9.scope: Deactivated successfully.
Sep 3 23:23:23.337520 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Sep 3 23:23:23.338601 systemd-logind[1496]: Removed session 9.
Sep 3 23:23:23.366734 containerd[1524]: time="2025-09-03T23:23:23.366696815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\" id:\"75c942484a6ae095602d172e68c8682facf9b9250ee4c47e83c6492c93ebd748\" pid:5229 exit_status:1 exited_at:{seconds:1756941803 nanos:366223565}"
Sep 3 23:23:24.084217 kubelet[2633]: I0903 23:23:24.084172 2633 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 3 23:23:24.084217 kubelet[2633]: I0903 23:23:24.084220 2633 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 3 23:23:24.293029 kubelet[2633]: I0903 23:23:24.292945 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zw6hs" podStartSLOduration=24.536656346 podStartE2EDuration="30.292927506s" podCreationTimestamp="2025-09-03 23:22:54 +0000 UTC" firstStartedPulling="2025-09-03 23:23:17.422650149 +0000 UTC m=+40.506407852" lastFinishedPulling="2025-09-03 23:23:23.178921309 +0000 UTC m=+46.262679012" observedRunningTime="2025-09-03 23:23:24.292708702 +0000 UTC m=+47.376466405" watchObservedRunningTime="2025-09-03 23:23:24.292927506 +0000 UTC m=+47.376685209"
Sep 3 23:23:25.673340 kubelet[2633]: I0903 23:23:25.673302 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:23:28.339347 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:49064.service - OpenSSH per-connection server daemon (10.0.0.1:49064).
Sep 3 23:23:28.399396 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 49064 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:28.401821 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:28.409889 systemd-logind[1496]: New session 10 of user core.
Sep 3 23:23:28.414290 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 3 23:23:28.568809 sshd[5257]: Connection closed by 10.0.0.1 port 49064
Sep 3 23:23:28.570988 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:28.578287 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:49064.service: Deactivated successfully.
Sep 3 23:23:28.580714 systemd[1]: session-10.scope: Deactivated successfully.
Sep 3 23:23:28.581489 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit.
Sep 3 23:23:28.584468 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:49066.service - OpenSSH per-connection server daemon (10.0.0.1:49066).
Sep 3 23:23:28.585621 systemd-logind[1496]: Removed session 10.
Sep 3 23:23:28.636544 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 49066 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:28.637873 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:28.643011 systemd-logind[1496]: New session 11 of user core.
Sep 3 23:23:28.653306 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 3 23:23:28.839488 sshd[5274]: Connection closed by 10.0.0.1 port 49066
Sep 3 23:23:28.840800 sshd-session[5272]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:28.850247 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:49066.service: Deactivated successfully.
Sep 3 23:23:28.852940 systemd[1]: session-11.scope: Deactivated successfully.
Sep 3 23:23:28.854013 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit.
Sep 3 23:23:28.857061 systemd-logind[1496]: Removed session 11.
Sep 3 23:23:28.858886 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:49080.service - OpenSSH per-connection server daemon (10.0.0.1:49080).
Sep 3 23:23:28.912747 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 49080 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:28.913870 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:28.917866 systemd-logind[1496]: New session 12 of user core.
Sep 3 23:23:28.924324 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 3 23:23:29.069440 sshd[5288]: Connection closed by 10.0.0.1 port 49080
Sep 3 23:23:29.070322 sshd-session[5286]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:29.073718 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:49080.service: Deactivated successfully.
Sep 3 23:23:29.075510 systemd[1]: session-12.scope: Deactivated successfully.
Sep 3 23:23:29.076178 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit.
Sep 3 23:23:29.079665 systemd-logind[1496]: Removed session 12.
Sep 3 23:23:34.089150 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:59226.service - OpenSSH per-connection server daemon (10.0.0.1:59226).
Sep 3 23:23:34.152667 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 59226 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:34.155025 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:34.159962 systemd-logind[1496]: New session 13 of user core.
Sep 3 23:23:34.166318 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 3 23:23:34.391310 sshd[5314]: Connection closed by 10.0.0.1 port 59226
Sep 3 23:23:34.391674 sshd-session[5312]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:34.405028 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:59226.service: Deactivated successfully.
Sep 3 23:23:34.406817 systemd[1]: session-13.scope: Deactivated successfully.
Sep 3 23:23:34.407769 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
Sep 3 23:23:34.411024 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:59230.service - OpenSSH per-connection server daemon (10.0.0.1:59230).
Sep 3 23:23:34.412017 systemd-logind[1496]: Removed session 13.
Sep 3 23:23:34.473727 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:34.475199 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:34.479766 systemd-logind[1496]: New session 14 of user core.
Sep 3 23:23:34.491298 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 3 23:23:34.712543 sshd[5329]: Connection closed by 10.0.0.1 port 59230
Sep 3 23:23:34.713071 sshd-session[5327]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:34.722228 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:59230.service: Deactivated successfully.
Sep 3 23:23:34.723888 systemd[1]: session-14.scope: Deactivated successfully.
Sep 3 23:23:34.724599 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
Sep 3 23:23:34.727093 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:59238.service - OpenSSH per-connection server daemon (10.0.0.1:59238).
Sep 3 23:23:34.727799 systemd-logind[1496]: Removed session 14.
Sep 3 23:23:34.784593 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 59238 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:34.786026 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:34.790553 systemd-logind[1496]: New session 15 of user core.
Sep 3 23:23:34.800341 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:23:35.270576 containerd[1524]: time="2025-09-03T23:23:35.270531338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\" id:\"a43c89c205215c2c03528767a57022cf92bf05468f5eafb65ace772d80708f26\" pid:5367 exited_at:{seconds:1756941815 nanos:270197412}"
Sep 3 23:23:35.947476 containerd[1524]: time="2025-09-03T23:23:35.947294316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8071d35cc5a8f8afdda222c92e81b01c45e314cb532baee91011f1a79fdd81b5\" id:\"fb87cdd6693ff9a4169fe426903f23c8d1b62818f7a10828a0221e50dea4bac2\" pid:5390 exited_at:{seconds:1756941815 nanos:947016271}"
Sep 3 23:23:36.378906 sshd[5343]: Connection closed by 10.0.0.1 port 59238
Sep 3 23:23:36.379924 sshd-session[5341]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:36.391000 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:59238.service: Deactivated successfully.
Sep 3 23:23:36.393414 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:23:36.393612 systemd[1]: session-15.scope: Consumed 538ms CPU time, 71.4M memory peak.
Sep 3 23:23:36.395170 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:23:36.398470 systemd-logind[1496]: Removed session 15.
Sep 3 23:23:36.400338 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:59242.service - OpenSSH per-connection server daemon (10.0.0.1:59242).
Sep 3 23:23:36.466543 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 59242 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:36.467708 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:36.471932 systemd-logind[1496]: New session 16 of user core.
Sep 3 23:23:36.480286 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:23:36.745634 sshd[5412]: Connection closed by 10.0.0.1 port 59242
Sep 3 23:23:36.747288 sshd-session[5408]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:36.755827 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:59242.service: Deactivated successfully.
Sep 3 23:23:36.758347 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:23:36.759245 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:23:36.763745 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:59246.service - OpenSSH per-connection server daemon (10.0.0.1:59246).
Sep 3 23:23:36.764929 systemd-logind[1496]: Removed session 16.
Sep 3 23:23:36.823205 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 59246 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:36.824844 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:36.829472 systemd-logind[1496]: New session 17 of user core.
Sep 3 23:23:36.841503 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:23:36.977120 sshd[5426]: Connection closed by 10.0.0.1 port 59246
Sep 3 23:23:36.977456 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:36.982020 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:23:36.982379 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:59246.service: Deactivated successfully.
Sep 3 23:23:36.984318 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:23:36.986066 systemd-logind[1496]: Removed session 17.
Sep 3 23:23:41.993836 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:49220.service - OpenSSH per-connection server daemon (10.0.0.1:49220).
Sep 3 23:23:42.054450 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 49220 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:42.059622 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:42.066973 systemd-logind[1496]: New session 18 of user core.
Sep 3 23:23:42.075313 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:23:42.202426 sshd[5450]: Connection closed by 10.0.0.1 port 49220
Sep 3 23:23:42.202972 sshd-session[5446]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:42.206743 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:49220.service: Deactivated successfully.
Sep 3 23:23:42.209658 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:23:42.210690 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:23:42.211778 systemd-logind[1496]: Removed session 18.
Sep 3 23:23:43.529752 containerd[1524]: time="2025-09-03T23:23:43.529573585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ef406eb9787fd0bf4809acf77b77d83ee4e0ed12acc52b99a759ad4293a53cd\" id:\"1fd5a859313f2524950c1fe87e98c7deb75b0ba62f5445f776547ac7d5b6245f\" pid:5478 exited_at:{seconds:1756941823 nanos:529314701}"
Sep 3 23:23:47.218813 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:49226.service - OpenSSH per-connection server daemon (10.0.0.1:49226).
Sep 3 23:23:47.272545 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 49226 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:47.273995 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:47.279157 systemd-logind[1496]: New session 19 of user core.
Sep 3 23:23:47.294493 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:23:47.452679 sshd[5498]: Connection closed by 10.0.0.1 port 49226
Sep 3 23:23:47.453246 sshd-session[5496]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:47.456998 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:49226.service: Deactivated successfully.
Sep 3 23:23:47.460489 systemd[1]: session-19.scope: Deactivated successfully.
Sep 3 23:23:47.462805 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Sep 3 23:23:47.464137 systemd-logind[1496]: Removed session 19.
Sep 3 23:23:47.503937 containerd[1524]: time="2025-09-03T23:23:47.503898610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0daa5c08dff54a3303a17cf27f62054f8a7048332fd715d1167b76e0121a4697\" id:\"43d2ffeeead9e5f5e710bd3bccacc4d8f078a3a56552df4b0dcfeb60cd4d8e97\" pid:5518 exited_at:{seconds:1756941827 nanos:503584653}"
Sep 3 23:23:52.470941 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:47548.service - OpenSSH per-connection server daemon (10.0.0.1:47548).
Sep 3 23:23:52.522534 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 47548 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:23:52.524127 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:23:52.528250 systemd-logind[1496]: New session 20 of user core.
Sep 3 23:23:52.539361 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 3 23:23:52.675880 sshd[5541]: Connection closed by 10.0.0.1 port 47548
Sep 3 23:23:52.676325 sshd-session[5539]: pam_unix(sshd:session): session closed for user core
Sep 3 23:23:52.682508 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:47548.service: Deactivated successfully.
Sep 3 23:23:52.684703 systemd[1]: session-20.scope: Deactivated successfully.
Sep 3 23:23:52.685909 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Sep 3 23:23:52.688805 systemd-logind[1496]: Removed session 20.