Sep 3 23:25:37.770957 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 3 23:25:37.770977 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 3 22:04:24 -00 2025
Sep 3 23:25:37.770987 kernel: KASLR enabled
Sep 3 23:25:37.770992 kernel: efi: EFI v2.7 by EDK II
Sep 3 23:25:37.770998 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Sep 3 23:25:37.771003 kernel: random: crng init done
Sep 3 23:25:37.771010 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 3 23:25:37.771015 kernel: secureboot: Secure boot enabled
Sep 3 23:25:37.771021 kernel: ACPI: Early table checksum verification disabled
Sep 3 23:25:37.771028 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 3 23:25:37.771034 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 3 23:25:37.771039 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771045 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771051 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771057 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771065 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771071 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771077 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771082 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771088 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 3 23:25:37.771094 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 3 23:25:37.771100 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 3 23:25:37.771106 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:25:37.771112 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 3 23:25:37.771117 kernel: Zone ranges:
Sep 3 23:25:37.771124 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:25:37.771130 kernel: DMA32 empty
Sep 3 23:25:37.771136 kernel: Normal empty
Sep 3 23:25:37.771142 kernel: Device empty
Sep 3 23:25:37.771201 kernel: Movable zone start for each node
Sep 3 23:25:37.771207 kernel: Early memory node ranges
Sep 3 23:25:37.771213 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 3 23:25:37.771219 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 3 23:25:37.771225 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 3 23:25:37.771231 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 3 23:25:37.771236 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 3 23:25:37.771242 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 3 23:25:37.771250 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 3 23:25:37.771256 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 3 23:25:37.771262 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 3 23:25:37.771271 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 3 23:25:37.771277 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 3 23:25:37.771291 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 3 23:25:37.771297 kernel: psci: probing for conduit method from ACPI.
Sep 3 23:25:37.771305 kernel: psci: PSCIv1.1 detected in firmware.
Sep 3 23:25:37.771311 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 3 23:25:37.771317 kernel: psci: Trusted OS migration not required
Sep 3 23:25:37.771324 kernel: psci: SMC Calling Convention v1.1
Sep 3 23:25:37.771330 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 3 23:25:37.771337 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 3 23:25:37.771343 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 3 23:25:37.771349 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 3 23:25:37.771355 kernel: Detected PIPT I-cache on CPU0
Sep 3 23:25:37.771363 kernel: CPU features: detected: GIC system register CPU interface
Sep 3 23:25:37.771369 kernel: CPU features: detected: Spectre-v4
Sep 3 23:25:37.771375 kernel: CPU features: detected: Spectre-BHB
Sep 3 23:25:37.771381 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 3 23:25:37.771388 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 3 23:25:37.771394 kernel: CPU features: detected: ARM erratum 1418040
Sep 3 23:25:37.771400 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 3 23:25:37.771406 kernel: alternatives: applying boot alternatives
Sep 3 23:25:37.771414 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:25:37.771421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 3 23:25:37.771428 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 3 23:25:37.771435 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 3 23:25:37.771441 kernel: Fallback order for Node 0: 0
Sep 3 23:25:37.771448 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 3 23:25:37.771454 kernel: Policy zone: DMA
Sep 3 23:25:37.771460 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 3 23:25:37.771466 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 3 23:25:37.771472 kernel: software IO TLB: area num 4.
Sep 3 23:25:37.771478 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 3 23:25:37.771485 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 3 23:25:37.771491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 3 23:25:37.771497 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 3 23:25:37.771504 kernel: rcu: RCU event tracing is enabled.
Sep 3 23:25:37.771512 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 3 23:25:37.771518 kernel: Trampoline variant of Tasks RCU enabled.
Sep 3 23:25:37.771525 kernel: Tracing variant of Tasks RCU enabled.
Sep 3 23:25:37.771531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 3 23:25:37.771537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 3 23:25:37.771544 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:25:37.771550 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 3 23:25:37.771557 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 3 23:25:37.771563 kernel: GICv3: 256 SPIs implemented
Sep 3 23:25:37.771569 kernel: GICv3: 0 Extended SPIs implemented
Sep 3 23:25:37.771575 kernel: Root IRQ handler: gic_handle_irq
Sep 3 23:25:37.771583 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 3 23:25:37.771589 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 3 23:25:37.771595 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 3 23:25:37.771601 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 3 23:25:37.771608 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 3 23:25:37.771614 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 3 23:25:37.771620 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 3 23:25:37.771627 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 3 23:25:37.771633 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 3 23:25:37.771639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:25:37.771646 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 3 23:25:37.771652 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 3 23:25:37.771660 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 3 23:25:37.771666 kernel: arm-pv: using stolen time PV
Sep 3 23:25:37.771673 kernel: Console: colour dummy device 80x25
Sep 3 23:25:37.771679 kernel: ACPI: Core revision 20240827
Sep 3 23:25:37.771686 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 3 23:25:37.771692 kernel: pid_max: default: 32768 minimum: 301
Sep 3 23:25:37.771699 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 3 23:25:37.771705 kernel: landlock: Up and running.
Sep 3 23:25:37.771712 kernel: SELinux: Initializing.
Sep 3 23:25:37.771720 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:25:37.771734 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 3 23:25:37.771741 kernel: rcu: Hierarchical SRCU implementation.
Sep 3 23:25:37.771748 kernel: rcu: Max phase no-delay instances is 400.
Sep 3 23:25:37.771754 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 3 23:25:37.771760 kernel: Remapping and enabling EFI services.
Sep 3 23:25:37.771767 kernel: smp: Bringing up secondary CPUs ...
Sep 3 23:25:37.771773 kernel: Detected PIPT I-cache on CPU1
Sep 3 23:25:37.771780 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 3 23:25:37.771788 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 3 23:25:37.771799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:25:37.771806 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 3 23:25:37.771814 kernel: Detected PIPT I-cache on CPU2
Sep 3 23:25:37.771821 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 3 23:25:37.771828 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 3 23:25:37.771835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:25:37.771842 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 3 23:25:37.771849 kernel: Detected PIPT I-cache on CPU3
Sep 3 23:25:37.771857 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 3 23:25:37.771864 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 3 23:25:37.771870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 3 23:25:37.771877 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 3 23:25:37.771884 kernel: smp: Brought up 1 node, 4 CPUs
Sep 3 23:25:37.771891 kernel: SMP: Total of 4 processors activated.
Sep 3 23:25:37.771897 kernel: CPU: All CPU(s) started at EL1
Sep 3 23:25:37.771904 kernel: CPU features: detected: 32-bit EL0 Support
Sep 3 23:25:37.771911 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 3 23:25:37.771920 kernel: CPU features: detected: Common not Private translations
Sep 3 23:25:37.771926 kernel: CPU features: detected: CRC32 instructions
Sep 3 23:25:37.771933 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 3 23:25:37.771940 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 3 23:25:37.771947 kernel: CPU features: detected: LSE atomic instructions
Sep 3 23:25:37.771953 kernel: CPU features: detected: Privileged Access Never
Sep 3 23:25:37.771960 kernel: CPU features: detected: RAS Extension Support
Sep 3 23:25:37.771967 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 3 23:25:37.771974 kernel: alternatives: applying system-wide alternatives
Sep 3 23:25:37.771988 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 3 23:25:37.771996 kernel: Memory: 2422372K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 127580K reserved, 16384K cma-reserved)
Sep 3 23:25:37.772005 kernel: devtmpfs: initialized
Sep 3 23:25:37.772012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 3 23:25:37.772019 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 3 23:25:37.772026 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 3 23:25:37.772032 kernel: 0 pages in range for non-PLT usage
Sep 3 23:25:37.772039 kernel: 508560 pages in range for PLT usage
Sep 3 23:25:37.772046 kernel: pinctrl core: initialized pinctrl subsystem
Sep 3 23:25:37.772054 kernel: SMBIOS 3.0.0 present.
Sep 3 23:25:37.772061 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 3 23:25:37.772068 kernel: DMI: Memory slots populated: 1/1
Sep 3 23:25:37.772075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 3 23:25:37.772082 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 3 23:25:37.772089 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 3 23:25:37.772096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 3 23:25:37.772103 kernel: audit: initializing netlink subsys (disabled)
Sep 3 23:25:37.772110 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 3 23:25:37.772118 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 3 23:25:37.772124 kernel: cpuidle: using governor menu
Sep 3 23:25:37.772131 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 3 23:25:37.772138 kernel: ASID allocator initialised with 32768 entries
Sep 3 23:25:37.772151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 3 23:25:37.772158 kernel: Serial: AMBA PL011 UART driver
Sep 3 23:25:37.772165 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 3 23:25:37.772172 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 3 23:25:37.772179 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 3 23:25:37.772188 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 3 23:25:37.772194 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 3 23:25:37.772201 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 3 23:25:37.772208 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 3 23:25:37.772215 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 3 23:25:37.772222 kernel: ACPI: Added _OSI(Module Device)
Sep 3 23:25:37.772228 kernel: ACPI: Added _OSI(Processor Device)
Sep 3 23:25:37.772235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 3 23:25:37.772242 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 3 23:25:37.772250 kernel: ACPI: Interpreter enabled
Sep 3 23:25:37.772257 kernel: ACPI: Using GIC for interrupt routing
Sep 3 23:25:37.772264 kernel: ACPI: MCFG table detected, 1 entries
Sep 3 23:25:37.772270 kernel: ACPI: CPU0 has been hot-added
Sep 3 23:25:37.772277 kernel: ACPI: CPU1 has been hot-added
Sep 3 23:25:37.772288 kernel: ACPI: CPU2 has been hot-added
Sep 3 23:25:37.772294 kernel: ACPI: CPU3 has been hot-added
Sep 3 23:25:37.772301 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 3 23:25:37.772308 kernel: printk: legacy console [ttyAMA0] enabled
Sep 3 23:25:37.772317 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 3 23:25:37.772449 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 3 23:25:37.772514 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 3 23:25:37.772574 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 3 23:25:37.772640 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 3 23:25:37.772699 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 3 23:25:37.772708 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 3 23:25:37.772717 kernel: PCI host bridge to bus 0000:00
Sep 3 23:25:37.772794 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 3 23:25:37.772851 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 3 23:25:37.772905 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 3 23:25:37.772960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 3 23:25:37.773036 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 3 23:25:37.773109 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 3 23:25:37.773206 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 3 23:25:37.773271 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 3 23:25:37.773342 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 3 23:25:37.773404 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 3 23:25:37.773464 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 3 23:25:37.773525 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 3 23:25:37.773580 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 3 23:25:37.773644 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 3 23:25:37.773697 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 3 23:25:37.773706 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 3 23:25:37.773713 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 3 23:25:37.773720 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 3 23:25:37.773734 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 3 23:25:37.773741 kernel: iommu: Default domain type: Translated
Sep 3 23:25:37.773748 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 3 23:25:37.773757 kernel: efivars: Registered efivars operations
Sep 3 23:25:37.773764 kernel: vgaarb: loaded
Sep 3 23:25:37.773771 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 3 23:25:37.773778 kernel: VFS: Disk quotas dquot_6.6.0
Sep 3 23:25:37.773784 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 3 23:25:37.773791 kernel: pnp: PnP ACPI init
Sep 3 23:25:37.773868 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 3 23:25:37.773878 kernel: pnp: PnP ACPI: found 1 devices
Sep 3 23:25:37.773887 kernel: NET: Registered PF_INET protocol family
Sep 3 23:25:37.773894 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 3 23:25:37.773901 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 3 23:25:37.773908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 3 23:25:37.773915 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 3 23:25:37.773922 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 3 23:25:37.773929 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 3 23:25:37.773935 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:25:37.773942 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 3 23:25:37.773951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 3 23:25:37.773957 kernel: PCI: CLS 0 bytes, default 64
Sep 3 23:25:37.773964 kernel: kvm [1]: HYP mode not available
Sep 3 23:25:37.773971 kernel: Initialise system trusted keyrings
Sep 3 23:25:37.773978 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 3 23:25:37.773984 kernel: Key type asymmetric registered
Sep 3 23:25:37.773991 kernel: Asymmetric key parser 'x509' registered
Sep 3 23:25:37.773998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 3 23:25:37.774005 kernel: io scheduler mq-deadline registered
Sep 3 23:25:37.774013 kernel: io scheduler kyber registered
Sep 3 23:25:37.774020 kernel: io scheduler bfq registered
Sep 3 23:25:37.774032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 3 23:25:37.774039 kernel: ACPI: button: Power Button [PWRB]
Sep 3 23:25:37.774046 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 3 23:25:37.774112 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 3 23:25:37.774121 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 3 23:25:37.774128 kernel: thunder_xcv, ver 1.0
Sep 3 23:25:37.774135 kernel: thunder_bgx, ver 1.0
Sep 3 23:25:37.774162 kernel: nicpf, ver 1.0
Sep 3 23:25:37.774170 kernel: nicvf, ver 1.0
Sep 3 23:25:37.774245 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 3 23:25:37.774310 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-03T23:25:37 UTC (1756941937)
Sep 3 23:25:37.774319 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 3 23:25:37.774326 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 3 23:25:37.774333 kernel: watchdog: NMI not fully supported
Sep 3 23:25:37.774340 kernel: watchdog: Hard watchdog permanently disabled
Sep 3 23:25:37.774349 kernel: NET: Registered PF_INET6 protocol family
Sep 3 23:25:37.774356 kernel: Segment Routing with IPv6
Sep 3 23:25:37.774363 kernel: In-situ OAM (IOAM) with IPv6
Sep 3 23:25:37.774370 kernel: NET: Registered PF_PACKET protocol family
Sep 3 23:25:37.774377 kernel: Key type dns_resolver registered
Sep 3 23:25:37.774384 kernel: registered taskstats version 1
Sep 3 23:25:37.774391 kernel: Loading compiled-in X.509 certificates
Sep 3 23:25:37.774398 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 08fc774dab168e64ce30c382a4517d40e72c4744'
Sep 3 23:25:37.774404 kernel: Demotion targets for Node 0: null
Sep 3 23:25:37.774412 kernel: Key type .fscrypt registered
Sep 3 23:25:37.774419 kernel: Key type fscrypt-provisioning registered
Sep 3 23:25:37.774426 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 3 23:25:37.774433 kernel: ima: Allocated hash algorithm: sha1
Sep 3 23:25:37.774440 kernel: ima: No architecture policies found
Sep 3 23:25:37.774447 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 3 23:25:37.774454 kernel: clk: Disabling unused clocks
Sep 3 23:25:37.774461 kernel: PM: genpd: Disabling unused power domains
Sep 3 23:25:37.774468 kernel: Warning: unable to open an initial console.
Sep 3 23:25:37.774476 kernel: Freeing unused kernel memory: 38976K
Sep 3 23:25:37.774483 kernel: Run /init as init process
Sep 3 23:25:37.774489 kernel: with arguments:
Sep 3 23:25:37.774496 kernel: /init
Sep 3 23:25:37.774503 kernel: with environment:
Sep 3 23:25:37.774509 kernel: HOME=/
Sep 3 23:25:37.774516 kernel: TERM=linux
Sep 3 23:25:37.774523 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 3 23:25:37.774530 systemd[1]: Successfully made /usr/ read-only.
Sep 3 23:25:37.774541 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:25:37.774549 systemd[1]: Detected virtualization kvm.
Sep 3 23:25:37.774556 systemd[1]: Detected architecture arm64.
Sep 3 23:25:37.774563 systemd[1]: Running in initrd.
Sep 3 23:25:37.774570 systemd[1]: No hostname configured, using default hostname.
Sep 3 23:25:37.774578 systemd[1]: Hostname set to .
Sep 3 23:25:37.774585 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:25:37.774595 systemd[1]: Queued start job for default target initrd.target.
Sep 3 23:25:37.774603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:25:37.774610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:25:37.774618 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 3 23:25:37.774626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:25:37.774634 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 3 23:25:37.774642 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 3 23:25:37.774652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 3 23:25:37.774660 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 3 23:25:37.774667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:25:37.774675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:25:37.774682 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:25:37.774689 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:25:37.774697 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:25:37.774704 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:25:37.774713 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:25:37.774720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:25:37.774736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 3 23:25:37.774743 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 3 23:25:37.774751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:25:37.774759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:25:37.774766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:25:37.774774 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:25:37.774781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 3 23:25:37.774791 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:25:37.774798 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 3 23:25:37.774806 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 3 23:25:37.774813 systemd[1]: Starting systemd-fsck-usr.service...
Sep 3 23:25:37.774820 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:25:37.774828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:25:37.774835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:25:37.774843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:25:37.774852 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 3 23:25:37.774859 systemd[1]: Finished systemd-fsck-usr.service.
Sep 3 23:25:37.774883 systemd-journald[246]: Collecting audit messages is disabled.
Sep 3 23:25:37.774903 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:25:37.774911 systemd-journald[246]: Journal started
Sep 3 23:25:37.774929 systemd-journald[246]: Runtime Journal (/run/log/journal/b8e9c52c363f4a74bfd57fc88a307efd) is 6M, max 48.5M, 42.4M free.
Sep 3 23:25:37.780196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 3 23:25:37.767695 systemd-modules-load[247]: Inserted module 'overlay'
Sep 3 23:25:37.781934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:25:37.783318 systemd-modules-load[247]: Inserted module 'br_netfilter'
Sep 3 23:25:37.784864 kernel: Bridge firewalling registered
Sep 3 23:25:37.784882 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:25:37.786799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:25:37.788943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:25:37.793663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 3 23:25:37.795260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:25:37.796828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:25:37.801627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:25:37.807959 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:25:37.809232 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 3 23:25:37.811210 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:25:37.812389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:25:37.815686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:25:37.817975 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:25:37.819864 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 3 23:25:37.834287 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=cb633bb0c889435b58a5c40c9c9bc9d5899ece5018569c9fa08f911265d3f18e
Sep 3 23:25:37.847692 systemd-resolved[287]: Positive Trust Anchors:
Sep 3 23:25:37.847712 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:25:37.847752 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:25:37.852579 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 3 23:25:37.853544 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:25:37.855603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:25:37.905161 kernel: SCSI subsystem initialized
Sep 3 23:25:37.908170 kernel: Loading iSCSI transport class v2.0-870.
Sep 3 23:25:37.916178 kernel: iscsi: registered transport (tcp)
Sep 3 23:25:37.928173 kernel: iscsi: registered transport (qla4xxx)
Sep 3 23:25:37.928227 kernel: QLogic iSCSI HBA Driver
Sep 3 23:25:37.944885 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:25:37.966123 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:25:37.969485 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:25:38.011169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:25:38.013277 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 3 23:25:38.080174 kernel: raid6: neonx8 gen() 15634 MB/s
Sep 3 23:25:38.097161 kernel: raid6: neonx4 gen() 15654 MB/s
Sep 3 23:25:38.114166 kernel: raid6: neonx2 gen() 13163 MB/s
Sep 3 23:25:38.131173 kernel: raid6: neonx1 gen() 10341 MB/s
Sep 3 23:25:38.148168 kernel: raid6: int64x8 gen() 6807 MB/s
Sep 3 23:25:38.165166 kernel: raid6: int64x4 gen() 7271 MB/s
Sep 3 23:25:38.182164 kernel: raid6: int64x2 gen() 6024 MB/s
Sep 3 23:25:38.199172 kernel: raid6: int64x1 gen() 5000 MB/s
Sep 3 23:25:38.199196 kernel: raid6: using algorithm neonx4 gen() 15654 MB/s
Sep 3 23:25:38.216173 kernel: raid6: .... xor() 12138 MB/s, rmw enabled
Sep 3 23:25:38.216187 kernel: raid6: using neon recovery algorithm
Sep 3 23:25:38.221419 kernel: xor: measuring software checksum speed
Sep 3 23:25:38.221448 kernel: 8regs : 21613 MB/sec
Sep 3 23:25:38.222537 kernel: 32regs : 21167 MB/sec
Sep 3 23:25:38.222553 kernel: arm64_neon : 27561 MB/sec
Sep 3 23:25:38.222562 kernel: xor: using function: arm64_neon (27561 MB/sec)
Sep 3 23:25:38.275184 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 3 23:25:38.281763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:25:38.283994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:25:38.313594 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 3 23:25:38.317682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:25:38.319394 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 3 23:25:38.343046 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 3 23:25:38.365658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:25:38.367854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:25:38.420802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:25:38.423460 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 3 23:25:38.470196 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 3 23:25:38.474443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:25:38.489342 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 3 23:25:38.489477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 3 23:25:38.489488 kernel: GPT:9289727 != 19775487
Sep 3 23:25:38.489497 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 3 23:25:38.489506 kernel: GPT:9289727 != 19775487
Sep 3 23:25:38.474569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:25:38.492475 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 3 23:25:38.492493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:25:38.488494 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:25:38.492489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:25:38.522580 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 3 23:25:38.523829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:25:38.526346 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:25:38.539767 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 3 23:25:38.547412 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:25:38.553370 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 3 23:25:38.554458 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 3 23:25:38.556341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:25:38.558714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:25:38.560480 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:25:38.562864 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 3 23:25:38.564659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 3 23:25:38.579308 disk-uuid[595]: Primary Header is updated.
Sep 3 23:25:38.579308 disk-uuid[595]: Secondary Entries is updated.
Sep 3 23:25:38.579308 disk-uuid[595]: Secondary Header is updated.
Sep 3 23:25:38.580425 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:25:38.585168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:25:38.588176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:25:39.590176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 3 23:25:39.590895 disk-uuid[600]: The operation has completed successfully.
Sep 3 23:25:39.620417 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 3 23:25:39.620523 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 3 23:25:39.645918 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 3 23:25:39.657964 sh[613]: Success
Sep 3 23:25:39.670550 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 3 23:25:39.670592 kernel: device-mapper: uevent: version 1.0.3
Sep 3 23:25:39.670612 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 3 23:25:39.677167 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 3 23:25:39.702131 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 3 23:25:39.704658 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 3 23:25:39.726362 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 3 23:25:39.736465 kernel: BTRFS: device fsid e8b97e78-d30f-4a41-b431-d82f3afef949 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 3 23:25:39.736509 kernel: BTRFS info (device dm-0): first mount of filesystem e8b97e78-d30f-4a41-b431-d82f3afef949
Sep 3 23:25:39.736527 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:25:39.744161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 3 23:25:39.744195 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 3 23:25:39.745341 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 3 23:25:39.746428 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:25:39.747514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 3 23:25:39.748272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 3 23:25:39.750937 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 3 23:25:39.779162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656)
Sep 3 23:25:39.781620 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:25:39.781666 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:25:39.784522 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:25:39.784552 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:25:39.788194 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:25:39.789564 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 3 23:25:39.791373 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 3 23:25:39.853196 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:25:39.855769 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:25:39.890882 systemd-networkd[800]: lo: Link UP
Sep 3 23:25:39.890895 systemd-networkd[800]: lo: Gained carrier
Sep 3 23:25:39.891790 systemd-networkd[800]: Enumeration completed
Sep 3 23:25:39.892249 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:25:39.892252 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:25:39.895753 ignition[707]: Ignition 2.21.0
Sep 3 23:25:39.893358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:25:39.895759 ignition[707]: Stage: fetch-offline
Sep 3 23:25:39.893453 systemd-networkd[800]: eth0: Link UP
Sep 3 23:25:39.895795 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:39.893544 systemd-networkd[800]: eth0: Gained carrier
Sep 3 23:25:39.895803 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:39.893553 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:25:39.895961 ignition[707]: parsed url from cmdline: ""
Sep 3 23:25:39.894321 systemd[1]: Reached target network.target - Network.
Sep 3 23:25:39.895964 ignition[707]: no config URL provided
Sep 3 23:25:39.895968 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Sep 3 23:25:39.895975 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Sep 3 23:25:39.895993 ignition[707]: op(1): [started] loading QEMU firmware config module
Sep 3 23:25:39.895997 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 3 23:25:39.901622 ignition[707]: op(1): [finished] loading QEMU firmware config module
Sep 3 23:25:39.916201 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:25:39.953778 ignition[707]: parsing config with SHA512: b1e473973ffbabf393df9098fc6ef4f1fcd32c21d32fc6f8084464fafb2bc74b974f24e752b4911bfcad949b161a05be6845e19d868ec59e098f2eeee596e566
Sep 3 23:25:39.959485 unknown[707]: fetched base config from "system"
Sep 3 23:25:39.959499 unknown[707]: fetched user config from "qemu"
Sep 3 23:25:39.959996 ignition[707]: fetch-offline: fetch-offline passed
Sep 3 23:25:39.960056 ignition[707]: Ignition finished successfully
Sep 3 23:25:39.962160 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:25:39.963864 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 3 23:25:39.964591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 3 23:25:40.003847 ignition[813]: Ignition 2.21.0
Sep 3 23:25:40.003862 ignition[813]: Stage: kargs
Sep 3 23:25:40.003993 ignition[813]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:40.004003 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:40.007570 ignition[813]: kargs: kargs passed
Sep 3 23:25:40.007626 ignition[813]: Ignition finished successfully
Sep 3 23:25:40.010615 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 3 23:25:40.014263 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 3 23:25:40.035827 ignition[821]: Ignition 2.21.0
Sep 3 23:25:40.035844 ignition[821]: Stage: disks
Sep 3 23:25:40.035993 ignition[821]: no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:40.036000 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:40.038666 ignition[821]: disks: disks passed
Sep 3 23:25:40.038730 ignition[821]: Ignition finished successfully
Sep 3 23:25:40.041070 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 3 23:25:40.042320 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 3 23:25:40.043186 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 3 23:25:40.044745 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:25:40.046203 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:25:40.047712 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:25:40.050069 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 3 23:25:40.075918 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 3 23:25:40.080025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 3 23:25:40.083595 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 3 23:25:40.146169 kernel: EXT4-fs (vda9): mounted filesystem d953e3b7-a0cb-45f7-b3a7-216a9a578dda r/w with ordered data mode. Quota mode: none.
Sep 3 23:25:40.146427 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 3 23:25:40.147477 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:25:40.149606 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:25:40.151082 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 3 23:25:40.151930 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 3 23:25:40.151970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 3 23:25:40.151992 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:25:40.166370 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 3 23:25:40.168608 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 3 23:25:40.172528 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839)
Sep 3 23:25:40.172557 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:25:40.172568 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:25:40.177496 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:25:40.177518 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:25:40.179297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:25:40.206109 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 3 23:25:40.210182 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 3 23:25:40.213472 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 3 23:25:40.217317 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 3 23:25:40.281483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 3 23:25:40.283623 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 3 23:25:40.285119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 3 23:25:40.304213 kernel: BTRFS info (device vda6): last unmount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:25:40.329279 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 3 23:25:40.341259 ignition[953]: INFO : Ignition 2.21.0
Sep 3 23:25:40.341259 ignition[953]: INFO : Stage: mount
Sep 3 23:25:40.342652 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:40.342652 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:40.342652 ignition[953]: INFO : mount: mount passed
Sep 3 23:25:40.342652 ignition[953]: INFO : Ignition finished successfully
Sep 3 23:25:40.345192 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 3 23:25:40.346823 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 3 23:25:40.871659 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 3 23:25:40.873123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 3 23:25:40.898197 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 3 23:25:40.898244 kernel: BTRFS info (device vda6): first mount of filesystem f1885725-917a-44ef-9d71-3c4c588cc4f4
Sep 3 23:25:40.899836 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 3 23:25:40.902255 kernel: BTRFS info (device vda6): turning on async discard
Sep 3 23:25:40.902298 kernel: BTRFS info (device vda6): enabling free space tree
Sep 3 23:25:40.903635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 3 23:25:40.942541 ignition[983]: INFO : Ignition 2.21.0
Sep 3 23:25:40.942541 ignition[983]: INFO : Stage: files
Sep 3 23:25:40.944678 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:40.944678 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:40.947261 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 3 23:25:40.947261 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 3 23:25:40.947261 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 3 23:25:40.950883 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 3 23:25:40.950883 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 3 23:25:40.950883 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 3 23:25:40.950046 unknown[983]: wrote ssh authorized keys file for user: core
Sep 3 23:25:40.955179 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 3 23:25:40.955179 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 3 23:25:41.079786 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 3 23:25:41.221292 systemd-networkd[800]: eth0: Gained IPv6LL
Sep 3 23:25:41.476949 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 3 23:25:41.476949 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:25:41.480304 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 3 23:25:41.491320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:25:41.491320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 3 23:25:41.491320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:25:41.491320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:25:41.497901 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:25:41.497901 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 3 23:25:42.036538 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 3 23:25:42.450465 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 3 23:25:42.450465 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 3 23:25:42.454377 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:25:42.458271 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 3 23:25:42.458271 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 3 23:25:42.460924 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 3 23:25:42.460924 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:25:42.460924 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 3 23:25:42.460924 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 3 23:25:42.460924 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:25:42.478731 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:25:42.483209 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 3 23:25:42.483209 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 3 23:25:42.486266 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 3 23:25:42.486266 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 3 23:25:42.486266 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:25:42.486266 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 3 23:25:42.486266 ignition[983]: INFO : files: files passed
Sep 3 23:25:42.486266 ignition[983]: INFO : Ignition finished successfully
Sep 3 23:25:42.488014 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 3 23:25:42.492513 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 3 23:25:42.500735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 3 23:25:42.517031 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 3 23:25:42.517125 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 3 23:25:42.521400 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 3 23:25:42.522662 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:25:42.525739 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:25:42.525739 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 3 23:25:42.529195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:25:42.530338 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 3 23:25:42.534904 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 3 23:25:42.567427 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 3 23:25:42.567525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 3 23:25:42.570843 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 3 23:25:42.572429 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 3 23:25:42.574532 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 3 23:25:42.579370 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 3 23:25:42.598895 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:25:42.603849 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 3 23:25:42.630503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:25:42.631451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:25:42.633052 systemd[1]: Stopped target timers.target - Timer Units.
Sep 3 23:25:42.634488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 3 23:25:42.634589 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 3 23:25:42.636630 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 3 23:25:42.638124 systemd[1]: Stopped target basic.target - Basic System.
Sep 3 23:25:42.639417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 3 23:25:42.640745 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 3 23:25:42.642268 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 3 23:25:42.643776 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 3 23:25:42.645207 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 3 23:25:42.646774 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 3 23:25:42.648224 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 3 23:25:42.649937 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 3 23:25:42.651269 systemd[1]: Stopped target swap.target - Swaps.
Sep 3 23:25:42.652408 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 3 23:25:42.652509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 3 23:25:42.654465 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:25:42.655985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:25:42.657474 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 3 23:25:42.658907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:25:42.659946 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 3 23:25:42.660043 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 3 23:25:42.662247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 3 23:25:42.662349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 3 23:25:42.663884 systemd[1]: Stopped target paths.target - Path Units.
Sep 3 23:25:42.665073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 3 23:25:42.668213 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:25:42.669275 systemd[1]: Stopped target slices.target - Slice Units.
Sep 3 23:25:42.670899 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 3 23:25:42.672090 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 3 23:25:42.672175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 3 23:25:42.673502 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 3 23:25:42.673571 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 3 23:25:42.674823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 3 23:25:42.674924 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 3 23:25:42.676259 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 3 23:25:42.676347 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 3 23:25:42.678200 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 3 23:25:42.680278 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 3 23:25:42.681188 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 3 23:25:42.681291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:25:42.682788 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 3 23:25:42.682877 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 3 23:25:42.689114 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 3 23:25:42.689209 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 3 23:25:42.698806 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 3 23:25:42.736392 ignition[1039]: INFO : Ignition 2.21.0
Sep 3 23:25:42.736392 ignition[1039]: INFO : Stage: umount
Sep 3 23:25:42.738661 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 3 23:25:42.738661 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 3 23:25:42.738661 ignition[1039]: INFO : umount: umount passed
Sep 3 23:25:42.738661 ignition[1039]: INFO : Ignition finished successfully
Sep 3 23:25:42.738854 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 3 23:25:42.738964 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 3 23:25:42.740300 systemd[1]: Stopped target network.target - Network.
Sep 3 23:25:42.743455 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 3 23:25:42.743515 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 3 23:25:42.744492 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 3 23:25:42.744541 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 3 23:25:42.745778 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 3 23:25:42.745823 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 3 23:25:42.747189 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 3 23:25:42.747226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 3 23:25:42.748729 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 3 23:25:42.751325 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 3 23:25:42.753587 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 3 23:25:42.753680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 3 23:25:42.754795 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 3 23:25:42.754836 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 3 23:25:42.757070 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 3 23:25:42.757176 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 3 23:25:42.760099 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 3 23:25:42.760633 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 3 23:25:42.760732 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:25:42.764088 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 3 23:25:42.764284 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 3 23:25:42.764367 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 3 23:25:42.768966 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 3 23:25:42.769384 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 3 23:25:42.770411 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 3 23:25:42.770445 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:25:42.772626 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 3 23:25:42.773456 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 3 23:25:42.773504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 3 23:25:42.775903 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 3 23:25:42.775942 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:25:42.778382 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 3 23:25:42.778420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:25:42.779962 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:25:42.783613 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 3 23:25:42.790838 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 3 23:25:42.790921 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 3 23:25:42.792671 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 3 23:25:42.792792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:25:42.794623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 3 23:25:42.794675 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:25:42.795557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 3 23:25:42.795589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:25:42.796920 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 3 23:25:42.796958 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 3 23:25:42.799277 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 3 23:25:42.799317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 3 23:25:42.801478 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 3 23:25:42.801523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 3 23:25:42.804465 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 3 23:25:42.805275 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 3 23:25:42.805322 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:25:42.807519 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 3 23:25:42.807558 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:25:42.809970 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 3 23:25:42.810007 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:25:42.812517 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 3 23:25:42.812553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:25:42.814398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 3 23:25:42.814434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:25:42.827419 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 3 23:25:42.827516 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 3 23:25:42.829211 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 3 23:25:42.831212 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 3 23:25:42.838808 systemd[1]: Switching root.
Sep 3 23:25:42.875059 systemd-journald[246]: Journal stopped
Sep 3 23:25:43.608317 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Sep 3 23:25:43.608366 kernel: SELinux: policy capability network_peer_controls=1
Sep 3 23:25:43.608381 kernel: SELinux: policy capability open_perms=1
Sep 3 23:25:43.608391 kernel: SELinux: policy capability extended_socket_class=1
Sep 3 23:25:43.608407 kernel: SELinux: policy capability always_check_network=0
Sep 3 23:25:43.608420 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 3 23:25:43.608430 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 3 23:25:43.608440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 3 23:25:43.608448 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 3 23:25:43.608457 kernel: SELinux: policy capability userspace_initial_context=0
Sep 3 23:25:43.608466 kernel: audit: type=1403 audit(1756941943.049:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 3 23:25:43.608476 systemd[1]: Successfully loaded SELinux policy in 48.408ms.
Sep 3 23:25:43.608496 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.427ms.
Sep 3 23:25:43.608506 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 3 23:25:43.608522 systemd[1]: Detected virtualization kvm.
Sep 3 23:25:43.608534 systemd[1]: Detected architecture arm64.
Sep 3 23:25:43.608545 systemd[1]: Detected first boot.
Sep 3 23:25:43.608555 systemd[1]: Initializing machine ID from VM UUID.
Sep 3 23:25:43.608566 zram_generator::config[1083]: No configuration found.
Sep 3 23:25:43.608576 kernel: NET: Registered PF_VSOCK protocol family
Sep 3 23:25:43.608585 systemd[1]: Populated /etc with preset unit settings.
Sep 3 23:25:43.608595 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 3 23:25:43.608605 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 3 23:25:43.608623 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 3 23:25:43.608633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:25:43.608643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 3 23:25:43.608654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 3 23:25:43.608665 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 3 23:25:43.608680 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 3 23:25:43.608690 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 3 23:25:43.608700 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 3 23:25:43.608723 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 3 23:25:43.608735 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 3 23:25:43.608745 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 3 23:25:43.608755 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 3 23:25:43.608765 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 3 23:25:43.608775 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 3 23:25:43.608785 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 3 23:25:43.608795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 3 23:25:43.608805 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 3 23:25:43.608816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 3 23:25:43.608826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 3 23:25:43.608836 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 3 23:25:43.608845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 3 23:25:43.608855 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 3 23:25:43.608864 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 3 23:25:43.608874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 3 23:25:43.608886 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 3 23:25:43.608897 systemd[1]: Reached target slices.target - Slice Units.
Sep 3 23:25:43.608907 systemd[1]: Reached target swap.target - Swaps.
Sep 3 23:25:43.608917 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 3 23:25:43.608927 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 3 23:25:43.608937 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 3 23:25:43.608946 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 3 23:25:43.608956 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 3 23:25:43.608965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 3 23:25:43.608975 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 3 23:25:43.608987 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 3 23:25:43.608996 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 3 23:25:43.609006 systemd[1]: Mounting media.mount - External Media Directory...
Sep 3 23:25:43.609016 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 3 23:25:43.609025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 3 23:25:43.609035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 3 23:25:43.609045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 3 23:25:43.609055 systemd[1]: Reached target machines.target - Containers.
Sep 3 23:25:43.609064 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 3 23:25:43.609076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:25:43.609086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 3 23:25:43.609097 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 3 23:25:43.609106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:25:43.609116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:25:43.609126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:25:43.609135 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 3 23:25:43.609154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:25:43.609166 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 3 23:25:43.609176 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 3 23:25:43.609186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 3 23:25:43.609196 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 3 23:25:43.609205 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 3 23:25:43.609215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:25:43.609225 kernel: fuse: init (API version 7.41)
Sep 3 23:25:43.609235 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 3 23:25:43.609244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 3 23:25:43.609255 kernel: loop: module loaded
Sep 3 23:25:43.609264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 3 23:25:43.609274 kernel: ACPI: bus type drm_connector registered
Sep 3 23:25:43.609283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 3 23:25:43.609293 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 3 23:25:43.609302 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 3 23:25:43.609315 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 3 23:25:43.609324 systemd[1]: Stopped verity-setup.service.
Sep 3 23:25:43.609334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 3 23:25:43.609364 systemd-journald[1155]: Collecting audit messages is disabled.
Sep 3 23:25:43.609386 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 3 23:25:43.609397 systemd-journald[1155]: Journal started
Sep 3 23:25:43.609417 systemd-journald[1155]: Runtime Journal (/run/log/journal/b8e9c52c363f4a74bfd57fc88a307efd) is 6M, max 48.5M, 42.4M free.
Sep 3 23:25:43.408312 systemd[1]: Queued start job for default target multi-user.target.
Sep 3 23:25:43.433955 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 3 23:25:43.434331 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 3 23:25:43.611209 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 3 23:25:43.611779 systemd[1]: Mounted media.mount - External Media Directory.
Sep 3 23:25:43.612676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 3 23:25:43.613629 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 3 23:25:43.614567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 3 23:25:43.616178 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 3 23:25:43.617388 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 3 23:25:43.618591 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 3 23:25:43.618844 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 3 23:25:43.620120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:25:43.622182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:25:43.623241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:25:43.623380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:25:43.624371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:25:43.624512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:25:43.625649 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 3 23:25:43.625821 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 3 23:25:43.627103 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:25:43.627470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:25:43.628536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 3 23:25:43.629644 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 3 23:25:43.630952 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 3 23:25:43.632456 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 3 23:25:43.644022 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 3 23:25:43.646208 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 3 23:25:43.647992 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 3 23:25:43.648944 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 3 23:25:43.648973 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 3 23:25:43.650644 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 3 23:25:43.659904 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 3 23:25:43.661010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:25:43.662262 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 3 23:25:43.664072 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 3 23:25:43.665195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:25:43.668635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 3 23:25:43.669531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:25:43.670393 systemd-journald[1155]: Time spent on flushing to /var/log/journal/b8e9c52c363f4a74bfd57fc88a307efd is 12.543ms for 882 entries.
Sep 3 23:25:43.670393 systemd-journald[1155]: System Journal (/var/log/journal/b8e9c52c363f4a74bfd57fc88a307efd) is 8M, max 195.6M, 187.6M free.
Sep 3 23:25:43.690215 systemd-journald[1155]: Received client request to flush runtime journal.
Sep 3 23:25:43.670622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 3 23:25:43.673204 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 3 23:25:43.678341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 3 23:25:43.683308 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 3 23:25:43.684474 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 3 23:25:43.685450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 3 23:25:43.688047 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 3 23:25:43.693358 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 3 23:25:43.698349 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 3 23:25:43.699761 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 3 23:25:43.704179 kernel: loop0: detected capacity change from 0 to 207008
Sep 3 23:25:43.708899 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 3 23:25:43.709186 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 3 23:25:43.709374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 3 23:25:43.714205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 3 23:25:43.715344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 3 23:25:43.718286 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 3 23:25:43.734550 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 3 23:25:43.744164 kernel: loop1: detected capacity change from 0 to 138376
Sep 3 23:25:43.746032 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 3 23:25:43.752303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 3 23:25:43.771339 kernel: loop2: detected capacity change from 0 to 107312
Sep 3 23:25:43.775390 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 3 23:25:43.775407 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Sep 3 23:25:43.779124 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 3 23:25:43.807201 kernel: loop3: detected capacity change from 0 to 207008
Sep 3 23:25:43.814208 kernel: loop4: detected capacity change from 0 to 138376
Sep 3 23:25:43.822164 kernel: loop5: detected capacity change from 0 to 107312
Sep 3 23:25:43.826829 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 3 23:25:43.827558 (sd-merge)[1225]: Merged extensions into '/usr'.
Sep 3 23:25:43.831237 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 3 23:25:43.831251 systemd[1]: Reloading...
Sep 3 23:25:43.890172 zram_generator::config[1254]: No configuration found.
Sep 3 23:25:43.934574 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 3 23:25:43.957069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:25:44.019451 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 3 23:25:44.019800 systemd[1]: Reloading finished in 188 ms.
Sep 3 23:25:44.040599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 3 23:25:44.041844 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 3 23:25:44.053565 systemd[1]: Starting ensure-sysext.service...
Sep 3 23:25:44.055190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 3 23:25:44.070195 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 3 23:25:44.070227 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 3 23:25:44.070444 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 3 23:25:44.070629 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 3 23:25:44.071269 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 3 23:25:44.071466 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Sep 3 23:25:44.071518 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Sep 3 23:25:44.071833 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 3 23:25:44.071847 systemd[1]: Reloading...
Sep 3 23:25:44.074047 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:25:44.074060 systemd-tmpfiles[1288]: Skipping /boot
Sep 3 23:25:44.082768 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot.
Sep 3 23:25:44.082781 systemd-tmpfiles[1288]: Skipping /boot
Sep 3 23:25:44.113165 zram_generator::config[1315]: No configuration found.
Sep 3 23:25:44.181113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:25:44.243314 systemd[1]: Reloading finished in 171 ms.
Sep 3 23:25:44.264464 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 3 23:25:44.281361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 3 23:25:44.287960 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:25:44.290121 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 3 23:25:44.295477 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 3 23:25:44.297896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 3 23:25:44.301285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 3 23:25:44.303598 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 3 23:25:44.308089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:25:44.312851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:25:44.315433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:25:44.318615 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:25:44.319584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:25:44.319688 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:25:44.325294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 3 23:25:44.329839 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:25:44.329980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:25:44.333869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:25:44.334041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:25:44.336089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:25:44.336243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:25:44.337949 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Sep 3 23:25:44.341214 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 3 23:25:44.342843 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 3 23:25:44.347822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 3 23:25:44.348822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 3 23:25:44.350115 augenrules[1386]: No rules
Sep 3 23:25:44.350754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 3 23:25:44.354296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 3 23:25:44.363186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 3 23:25:44.364011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 3 23:25:44.364044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 3 23:25:44.366379 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 3 23:25:44.369822 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 3 23:25:44.370746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 3 23:25:44.371059 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 3 23:25:44.372965 systemd[1]: Finished ensure-sysext.service.
Sep 3 23:25:44.376501 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:25:44.376713 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:25:44.378458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 3 23:25:44.378588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 3 23:25:44.379922 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 3 23:25:44.380054 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 3 23:25:44.382602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 3 23:25:44.386599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 3 23:25:44.388554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 3 23:25:44.388700 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 3 23:25:44.390576 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 3 23:25:44.405295 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 3 23:25:44.407214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 3 23:25:44.407277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 3 23:25:44.410780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 3 23:25:44.413221 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 3 23:25:44.454646 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 3 23:25:44.457390 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 3 23:25:44.479200 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 3 23:25:44.501543 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 3 23:25:44.539452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 3 23:25:44.603607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 3 23:25:44.614718 systemd-networkd[1430]: lo: Link UP
Sep 3 23:25:44.614725 systemd-networkd[1430]: lo: Gained carrier
Sep 3 23:25:44.615465 systemd-networkd[1430]: Enumeration completed
Sep 3 23:25:44.615555 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 3 23:25:44.615845 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:25:44.615848 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 3 23:25:44.616327 systemd-networkd[1430]: eth0: Link UP
Sep 3 23:25:44.616417 systemd-networkd[1430]: eth0: Gained carrier
Sep 3 23:25:44.616429 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 3 23:25:44.621014 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 3 23:25:44.623817 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 3 23:25:44.631184 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 3 23:25:44.636961 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 3 23:25:44.637664 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 3 23:25:44.637717 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-09-03 23:25:44.803267 UTC.
Sep 3 23:25:44.638303 systemd[1]: Reached target time-set.target - System Time Set.
Sep 3 23:25:44.642096 systemd-resolved[1354]: Positive Trust Anchors:
Sep 3 23:25:44.642114 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 3 23:25:44.642115 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 3 23:25:44.642170 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 3 23:25:44.647622 systemd-resolved[1354]: Defaulting to hostname 'linux'.
Sep 3 23:25:44.649168 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 3 23:25:44.650033 systemd[1]: Reached target network.target - Network.
Sep 3 23:25:44.650777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 3 23:25:44.651687 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 3 23:25:44.652695 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 3 23:25:44.653664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 3 23:25:44.654775 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 3 23:25:44.655691 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 3 23:25:44.656679 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 3 23:25:44.657628 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 3 23:25:44.657658 systemd[1]: Reached target paths.target - Path Units.
Sep 3 23:25:44.658337 systemd[1]: Reached target timers.target - Timer Units.
Sep 3 23:25:44.659919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 3 23:25:44.661885 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 3 23:25:44.664760 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 3 23:25:44.665936 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 3 23:25:44.666969 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 3 23:25:44.669638 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 3 23:25:44.670809 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 3 23:25:44.672266 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 3 23:25:44.673129 systemd[1]: Reached target sockets.target - Socket Units.
Sep 3 23:25:44.673852 systemd[1]: Reached target basic.target - Basic System.
Sep 3 23:25:44.674613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:25:44.674643 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 3 23:25:44.675474 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 3 23:25:44.677109 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 3 23:25:44.678775 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 3 23:25:44.680503 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 3 23:25:44.682106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 3 23:25:44.683058 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 3 23:25:44.683941 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 3 23:25:44.685574 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 3 23:25:44.686720 jq[1480]: false
Sep 3 23:25:44.688291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 3 23:25:44.691379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 3 23:25:44.694094 extend-filesystems[1481]: Found /dev/vda6
Sep 3 23:25:44.694302 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 3 23:25:44.696524 extend-filesystems[1481]: Found /dev/vda9
Sep 3 23:25:44.698002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 3 23:25:44.698366 extend-filesystems[1481]: Checking size of /dev/vda9
Sep 3 23:25:44.700424 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 3 23:25:44.701049 systemd[1]: Starting update-engine.service - Update Engine...
Sep 3 23:25:44.703284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 3 23:25:44.705363 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 3 23:25:44.708513 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 3 23:25:44.708675 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 3 23:25:44.708910 systemd[1]: motdgen.service: Deactivated successfully.
Sep 3 23:25:44.709065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 3 23:25:44.710875 jq[1500]: true
Sep 3 23:25:44.712558 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 3 23:25:44.712741 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 3 23:25:44.712810 extend-filesystems[1481]: Resized partition /dev/vda9
Sep 3 23:25:44.719424 extend-filesystems[1508]: resize2fs 1.47.2 (1-Jan-2025)
Sep 3 23:25:44.722163 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 3 23:25:44.723205 update_engine[1499]: I20250903 23:25:44.722561 1499 main.cc:92] Flatcar Update Engine starting
Sep 3 23:25:44.738872 jq[1509]: true
Sep 3 23:25:44.742461 tar[1506]: linux-arm64/LICENSE
Sep 3 23:25:44.742772 tar[1506]: linux-arm64/helm
Sep 3 23:25:44.744187 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 3 23:25:44.750802 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 3 23:25:44.759178 update_engine[1499]: I20250903 23:25:44.753118 1499 update_check_scheduler.cc:74] Next update check in 9m50s
Sep 3 23:25:44.750611 dbus-daemon[1478]: [system] SELinux support is enabled
Sep 3 23:25:44.752216 (ntainerd)[1519]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 3 23:25:44.754606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 3 23:25:44.754630 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 3 23:25:44.755845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 3 23:25:44.755862 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 3 23:25:44.759289 systemd[1]: Started update-engine.service - Update Engine.
Sep 3 23:25:44.761176 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 3 23:25:44.761176 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 3 23:25:44.761176 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 3 23:25:44.764603 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Sep 3 23:25:44.761508 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 3 23:25:44.763283 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 3 23:25:44.765187 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 3 23:25:44.784910 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 3 23:25:44.785542 systemd-logind[1491]: New seat seat0.
Sep 3 23:25:44.786440 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 3 23:25:44.797029 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Sep 3 23:25:44.801843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 3 23:25:44.806765 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 3 23:25:44.835734 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 3 23:25:44.927576 containerd[1519]: time="2025-09-03T23:25:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 3 23:25:44.930154 containerd[1519]: time="2025-09-03T23:25:44.928951920Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 3 23:25:44.941906 containerd[1519]: time="2025-09-03T23:25:44.941866360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.8µs"
Sep 3 23:25:44.941906 containerd[1519]: time="2025-09-03T23:25:44.941898240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 3 23:25:44.941992 containerd[1519]: time="2025-09-03T23:25:44.941916000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 3 23:25:44.942069 containerd[1519]: time="2025-09-03T23:25:44.942047800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 3 23:25:44.942094 containerd[1519]: time="2025-09-03T23:25:44.942067600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 3 23:25:44.942094 containerd[1519]: time="2025-09-03T23:25:44.942088920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942158 containerd[1519]: time="2025-09-03T23:25:44.942133400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942184 containerd[1519]: time="2025-09-03T23:25:44.942163360Z" level=info
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942375 containerd[1519]: time="2025-09-03T23:25:44.942352160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942375 containerd[1519]: time="2025-09-03T23:25:44.942370840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942415 containerd[1519]: time="2025-09-03T23:25:44.942382640Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942415 containerd[1519]: time="2025-09-03T23:25:44.942390640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942472 containerd[1519]: time="2025-09-03T23:25:44.942456200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942655 containerd[1519]: time="2025-09-03T23:25:44.942630000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942684 containerd[1519]: time="2025-09-03T23:25:44.942666760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 3 23:25:44.942684 containerd[1519]: time="2025-09-03T23:25:44.942676560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 3 23:25:44.942731 containerd[1519]: time="2025-09-03T23:25:44.942713600Z" level=info msg="loading plugin"
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 3 23:25:44.942934 containerd[1519]: time="2025-09-03T23:25:44.942914680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 3 23:25:44.942994 containerd[1519]: time="2025-09-03T23:25:44.942973560Z" level=info msg="metadata content store policy set" policy=shared
Sep 3 23:25:44.946373 containerd[1519]: time="2025-09-03T23:25:44.946343680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 3 23:25:44.947533 containerd[1519]: time="2025-09-03T23:25:44.947477080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 3 23:25:44.947570 containerd[1519]: time="2025-09-03T23:25:44.947559320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 3 23:25:44.947598 containerd[1519]: time="2025-09-03T23:25:44.947580320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 3 23:25:44.947621 containerd[1519]: time="2025-09-03T23:25:44.947597680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 3 23:25:44.947621 containerd[1519]: time="2025-09-03T23:25:44.947609920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 3 23:25:44.947653 containerd[1519]: time="2025-09-03T23:25:44.947635920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 3 23:25:44.947670 containerd[1519]: time="2025-09-03T23:25:44.947653280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 3 23:25:44.947686 containerd[1519]: time="2025-09-03T23:25:44.947667440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service
type=io.containerd.service.v1
Sep 3 23:25:44.947686 containerd[1519]: time="2025-09-03T23:25:44.947681160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 3 23:25:44.947728 containerd[1519]: time="2025-09-03T23:25:44.947691520Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 3 23:25:44.947728 containerd[1519]: time="2025-09-03T23:25:44.947714880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 3 23:25:44.947851 containerd[1519]: time="2025-09-03T23:25:44.947829720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 3 23:25:44.947876 containerd[1519]: time="2025-09-03T23:25:44.947859480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 3 23:25:44.947893 containerd[1519]: time="2025-09-03T23:25:44.947879120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 3 23:25:44.947910 containerd[1519]: time="2025-09-03T23:25:44.947891720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 3 23:25:44.947927 containerd[1519]: time="2025-09-03T23:25:44.947906040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 3 23:25:44.947927 containerd[1519]: time="2025-09-03T23:25:44.947919760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 3 23:25:44.948032 containerd[1519]: time="2025-09-03T23:25:44.947993600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 3 23:25:44.948060 containerd[1519]: time="2025-09-03T23:25:44.948041880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 3 23:25:44.948079 containerd[1519]:
time="2025-09-03T23:25:44.948059160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 3 23:25:44.948099 containerd[1519]: time="2025-09-03T23:25:44.948075800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 3 23:25:44.948117 containerd[1519]: time="2025-09-03T23:25:44.948095400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 3 23:25:44.948329 containerd[1519]: time="2025-09-03T23:25:44.948310080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 3 23:25:44.948357 containerd[1519]: time="2025-09-03T23:25:44.948334800Z" level=info msg="Start snapshots syncer"
Sep 3 23:25:44.948375 containerd[1519]: time="2025-09-03T23:25:44.948364480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 3 23:25:44.948688 containerd[1519]: time="2025-09-03T23:25:44.948645200Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 3 23:25:44.948790 containerd[1519]: time="2025-09-03T23:25:44.948711720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 3 23:25:44.948827 containerd[1519]: time="2025-09-03T23:25:44.948802920Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 3 23:25:44.948946 containerd[1519]: time="2025-09-03T23:25:44.948924640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 3 23:25:44.948970 containerd[1519]: time="2025-09-03T23:25:44.948963120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 3 23:25:44.948989 containerd[1519]: time="2025-09-03T23:25:44.948978440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 3 23:25:44.949006 containerd[1519]: time="2025-09-03T23:25:44.948990000Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 3 23:25:44.949028 containerd[1519]: time="2025-09-03T23:25:44.949006520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 3 23:25:44.949028 containerd[1519]: time="2025-09-03T23:25:44.949021760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 3 23:25:44.949061 containerd[1519]: time="2025-09-03T23:25:44.949035920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 3 23:25:44.949078 containerd[1519]: time="2025-09-03T23:25:44.949065520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 3 23:25:44.949096 containerd[1519]: time="2025-09-03T23:25:44.949080880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 3 23:25:44.949112 containerd[1519]: time="2025-09-03T23:25:44.949095840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 3 23:25:44.949318 containerd[1519]: time="2025-09-03T23:25:44.949137560Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:25:44.949362 containerd[1519]: time="2025-09-03T23:25:44.949323440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 3 23:25:44.949362 containerd[1519]: time="2025-09-03T23:25:44.949338960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:25:44.949362 containerd[1519]: time="2025-09-03T23:25:44.949352360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 3 23:25:44.949362 containerd[1519]: time="2025-09-03T23:25:44.949360680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 3 23:25:44.949434 containerd[1519]: time="2025-09-03T23:25:44.949376400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 3 23:25:44.949434 containerd[1519]: time="2025-09-03T23:25:44.949390560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 3 23:25:44.951172 containerd[1519]: time="2025-09-03T23:25:44.949465720Z" level=info msg="runtime interface created"
Sep 3 23:25:44.951172 containerd[1519]: time="2025-09-03T23:25:44.949481120Z" level=info msg="created NRI interface"
Sep 3 23:25:44.951172 containerd[1519]: time="2025-09-03T23:25:44.949491280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 3 23:25:44.951172 containerd[1519]: time="2025-09-03T23:25:44.949506960Z" level=info msg="Connect containerd service"
Sep 3 23:25:44.951172 containerd[1519]: time="2025-09-03T23:25:44.949540520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 3 23:25:44.951172 containerd[1519]:
time="2025-09-03T23:25:44.950557880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 3 23:25:45.031589 containerd[1519]: time="2025-09-03T23:25:45.031478004Z" level=info msg="Start subscribing containerd event"
Sep 3 23:25:45.031743 containerd[1519]: time="2025-09-03T23:25:45.031727833Z" level=info msg="Start recovering state"
Sep 3 23:25:45.031835 containerd[1519]: time="2025-09-03T23:25:45.031803814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 3 23:25:45.031871 containerd[1519]: time="2025-09-03T23:25:45.031860280Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 3 23:25:45.031954 containerd[1519]: time="2025-09-03T23:25:45.031939650Z" level=info msg="Start event monitor"
Sep 3 23:25:45.032036 containerd[1519]: time="2025-09-03T23:25:45.032023226Z" level=info msg="Start cni network conf syncer for default"
Sep 3 23:25:45.032095 containerd[1519]: time="2025-09-03T23:25:45.032084713Z" level=info msg="Start streaming server"
Sep 3 23:25:45.032141 containerd[1519]: time="2025-09-03T23:25:45.032130890Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 3 23:25:45.032217 containerd[1519]: time="2025-09-03T23:25:45.032205524Z" level=info msg="runtime interface starting up..."
Sep 3 23:25:45.032262 containerd[1519]: time="2025-09-03T23:25:45.032251701Z" level=info msg="starting plugins..."
Sep 3 23:25:45.032326 containerd[1519]: time="2025-09-03T23:25:45.032315189Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 3 23:25:45.033308 containerd[1519]: time="2025-09-03T23:25:45.033286985Z" level=info msg="containerd successfully booted in 0.106053s"
Sep 3 23:25:45.033366 systemd[1]: Started containerd.service - containerd container runtime.
Sep 3 23:25:45.159225 tar[1506]: linux-arm64/README.md
Sep 3 23:25:45.178202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 3 23:25:45.664531 sshd_keygen[1507]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 3 23:25:45.684211 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 3 23:25:45.687055 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 3 23:25:45.708143 systemd[1]: issuegen.service: Deactivated successfully.
Sep 3 23:25:45.709229 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 3 23:25:45.711474 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 3 23:25:45.733209 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 3 23:25:45.735434 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 3 23:25:45.737356 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 3 23:25:45.738369 systemd[1]: Reached target getty.target - Login Prompts.
Sep 3 23:25:46.085738 systemd-networkd[1430]: eth0: Gained IPv6LL
Sep 3 23:25:46.087842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 3 23:25:46.089262 systemd[1]: Reached target network-online.target - Network is Online.
Sep 3 23:25:46.091203 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 3 23:25:46.093251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:25:46.105892 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 3 23:25:46.123193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 3 23:25:46.124448 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 3 23:25:46.124614 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 3 23:25:46.126913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 3 23:25:46.639663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:25:46.640935 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 3 23:25:46.644515 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:25:46.645255 systemd[1]: Startup finished in 1.990s (kernel) + 5.451s (initrd) + 3.644s (userspace) = 11.087s.
Sep 3 23:25:46.974204 kubelet[1612]: E0903 23:25:46.974086 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:25:46.976393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:25:46.976527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:25:46.978253 systemd[1]: kubelet.service: Consumed 722ms CPU time, 255.5M memory peak.
Sep 3 23:25:50.043820 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 3 23:25:50.045009 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:59936.service - OpenSSH per-connection server daemon (10.0.0.1:59936).
Sep 3 23:25:50.138923 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 59936 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:50.140484 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:50.151228 systemd-logind[1491]: New session 1 of user core.
Sep 3 23:25:50.151974 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 3 23:25:50.152949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 3 23:25:50.175191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 3 23:25:50.177377 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 3 23:25:50.198068 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 3 23:25:50.200349 systemd-logind[1491]: New session c1 of user core.
Sep 3 23:25:50.315610 systemd[1629]: Queued start job for default target default.target.
Sep 3 23:25:50.326075 systemd[1629]: Created slice app.slice - User Application Slice.
Sep 3 23:25:50.326103 systemd[1629]: Reached target paths.target - Paths.
Sep 3 23:25:50.326136 systemd[1629]: Reached target timers.target - Timers.
Sep 3 23:25:50.327488 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 3 23:25:50.336192 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 3 23:25:50.336247 systemd[1629]: Reached target sockets.target - Sockets.
Sep 3 23:25:50.336281 systemd[1629]: Reached target basic.target - Basic System.
Sep 3 23:25:50.336308 systemd[1629]: Reached target default.target - Main User Target.
Sep 3 23:25:50.336333 systemd[1629]: Startup finished in 130ms.
Sep 3 23:25:50.336508 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 3 23:25:50.338486 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 3 23:25:50.405653 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:59948.service - OpenSSH per-connection server daemon (10.0.0.1:59948).
Sep 3 23:25:50.461139 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 59948 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:50.462350 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:50.466064 systemd-logind[1491]: New session 2 of user core.
Sep 3 23:25:50.478314 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 3 23:25:50.529367 sshd[1642]: Connection closed by 10.0.0.1 port 59948
Sep 3 23:25:50.529923 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:50.539085 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:59948.service: Deactivated successfully.
Sep 3 23:25:50.541788 systemd[1]: session-2.scope: Deactivated successfully.
Sep 3 23:25:50.543308 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit.
Sep 3 23:25:50.546317 systemd-logind[1491]: Removed session 2.
Sep 3 23:25:50.547030 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:59964.service - OpenSSH per-connection server daemon (10.0.0.1:59964).
Sep 3 23:25:50.604612 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 59964 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:50.605815 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:50.609655 systemd-logind[1491]: New session 3 of user core.
Sep 3 23:25:50.617304 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 3 23:25:50.664970 sshd[1650]: Connection closed by 10.0.0.1 port 59964
Sep 3 23:25:50.664826 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:50.686142 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:59964.service: Deactivated successfully.
Sep 3 23:25:50.687594 systemd[1]: session-3.scope: Deactivated successfully.
Sep 3 23:25:50.688294 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit.
Sep 3 23:25:50.691610 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:59974.service - OpenSSH per-connection server daemon (10.0.0.1:59974).
Sep 3 23:25:50.692192 systemd-logind[1491]: Removed session 3.
Sep 3 23:25:50.740823 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 59974 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:50.742078 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:50.746820 systemd-logind[1491]: New session 4 of user core.
Sep 3 23:25:50.761346 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 3 23:25:50.813209 sshd[1658]: Connection closed by 10.0.0.1 port 59974
Sep 3 23:25:50.813060 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:50.827073 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:59974.service: Deactivated successfully.
Sep 3 23:25:50.828588 systemd[1]: session-4.scope: Deactivated successfully.
Sep 3 23:25:50.830720 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit.
Sep 3 23:25:50.833295 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:59976.service - OpenSSH per-connection server daemon (10.0.0.1:59976).
Sep 3 23:25:50.833979 systemd-logind[1491]: Removed session 4.
Sep 3 23:25:50.889324 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 59976 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:50.890800 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:50.895360 systemd-logind[1491]: New session 5 of user core.
Sep 3 23:25:50.906311 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 3 23:25:50.962623 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 3 23:25:50.962895 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:25:50.972725 sudo[1667]: pam_unix(sudo:session): session closed for user root
Sep 3 23:25:50.974216 sshd[1666]: Connection closed by 10.0.0.1 port 59976
Sep 3 23:25:50.974626 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:50.987963 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:59976.service: Deactivated successfully.
Sep 3 23:25:50.989256 systemd[1]: session-5.scope: Deactivated successfully.
Sep 3 23:25:50.989850 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit.
Sep 3 23:25:50.992127 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:59982.service - OpenSSH per-connection server daemon (10.0.0.1:59982).
Sep 3 23:25:50.992572 systemd-logind[1491]: Removed session 5.
Sep 3 23:25:51.041044 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 59982 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:51.042342 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:51.046265 systemd-logind[1491]: New session 6 of user core.
Sep 3 23:25:51.056377 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 3 23:25:51.106987 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 3 23:25:51.107262 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:25:51.112188 sudo[1677]: pam_unix(sudo:session): session closed for user root
Sep 3 23:25:51.116352 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 3 23:25:51.116597 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:25:51.133456 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 3 23:25:51.167677 augenrules[1699]: No rules
Sep 3 23:25:51.168915 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 3 23:25:51.169252 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 3 23:25:51.170489 sudo[1676]: pam_unix(sudo:session): session closed for user root
Sep 3 23:25:51.171678 sshd[1675]: Connection closed by 10.0.0.1 port 59982
Sep 3 23:25:51.173224 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Sep 3 23:25:51.183265 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:59982.service: Deactivated successfully.
Sep 3 23:25:51.185428 systemd[1]: session-6.scope: Deactivated successfully.
Sep 3 23:25:51.187330 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit.
Sep 3 23:25:51.189689 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:59986.service - OpenSSH per-connection server daemon (10.0.0.1:59986).
Sep 3 23:25:51.193168 systemd-logind[1491]: Removed session 6.
Sep 3 23:25:51.242147 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59986 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:25:51.243931 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:25:51.254520 systemd-logind[1491]: New session 7 of user core.
Sep 3 23:25:51.262319 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 3 23:25:51.314723 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 3 23:25:51.314987 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 3 23:25:51.636410 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 3 23:25:51.651563 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 3 23:25:51.868063 dockerd[1732]: time="2025-09-03T23:25:51.868005276Z" level=info msg="Starting up"
Sep 3 23:25:51.870021 dockerd[1732]: time="2025-09-03T23:25:51.869904525Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 3 23:25:51.994468 dockerd[1732]: time="2025-09-03T23:25:51.994364347Z" level=info msg="Loading containers: start."
Sep 3 23:25:52.002179 kernel: Initializing XFRM netlink socket
Sep 3 23:25:52.182308 systemd-networkd[1430]: docker0: Link UP
Sep 3 23:25:52.188065 dockerd[1732]: time="2025-09-03T23:25:52.188009349Z" level=info msg="Loading containers: done."
Sep 3 23:25:52.199726 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck190195082-merged.mount: Deactivated successfully.
Sep 3 23:25:52.203918 dockerd[1732]: time="2025-09-03T23:25:52.203866058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 3 23:25:52.203990 dockerd[1732]: time="2025-09-03T23:25:52.203961226Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 3 23:25:52.204095 dockerd[1732]: time="2025-09-03T23:25:52.204068329Z" level=info msg="Initializing buildkit"
Sep 3 23:25:52.231412 dockerd[1732]: time="2025-09-03T23:25:52.231365241Z" level=info msg="Completed buildkit initialization"
Sep 3 23:25:52.235981 dockerd[1732]: time="2025-09-03T23:25:52.235940628Z" level=info msg="Daemon has completed initialization"
Sep 3 23:25:52.237244 dockerd[1732]: time="2025-09-03T23:25:52.236018779Z" level=info msg="API listen on /run/docker.sock"
Sep 3 23:25:52.236233 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 3 23:25:52.797694 containerd[1519]: time="2025-09-03T23:25:52.797429296Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 3 23:25:53.401880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398169937.mount: Deactivated successfully.
Sep 3 23:25:54.450134 containerd[1519]: time="2025-09-03T23:25:54.450081389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:54.451056 containerd[1519]: time="2025-09-03T23:25:54.450909151Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Sep 3 23:25:54.451934 containerd[1519]: time="2025-09-03T23:25:54.451907770Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:54.455246 containerd[1519]: time="2025-09-03T23:25:54.455219743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:54.455699 containerd[1519]: time="2025-09-03T23:25:54.455667232Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.658112661s"
Sep 3 23:25:54.455745 containerd[1519]: time="2025-09-03T23:25:54.455699552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 3 23:25:54.456412 containerd[1519]: time="2025-09-03T23:25:54.456337942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 3 23:25:55.796474 containerd[1519]: time="2025-09-03T23:25:55.796427080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.797025 containerd[1519]: time="2025-09-03T23:25:55.796996686Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Sep 3 23:25:55.797700 containerd[1519]: time="2025-09-03T23:25:55.797675042Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.800101 containerd[1519]: time="2025-09-03T23:25:55.800067766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:55.801070 containerd[1519]: time="2025-09-03T23:25:55.801039552Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.344672878s"
Sep 3 23:25:55.801120 containerd[1519]: time="2025-09-03T23:25:55.801072410Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 3 23:25:55.801509 containerd[1519]: time="2025-09-03T23:25:55.801457899Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 3 23:25:56.878889 containerd[1519]: time="2025-09-03T23:25:56.878842656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:56.879446 containerd[1519]: time="2025-09-03T23:25:56.879414006Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Sep 3 23:25:56.880192 containerd[1519]: time="2025-09-03T23:25:56.880168104Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:56.883159 containerd[1519]: time="2025-09-03T23:25:56.882669009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:56.883658 containerd[1519]: time="2025-09-03T23:25:56.883636961Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.082149066s"
Sep 3 23:25:56.883698 containerd[1519]: time="2025-09-03T23:25:56.883664050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 3 23:25:56.884210 containerd[1519]: time="2025-09-03T23:25:56.884186247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 3 23:25:57.226987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 3 23:25:57.228306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:25:57.358541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:25:57.361583 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 3 23:25:57.397225 kubelet[2014]: E0903 23:25:57.397165 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 3 23:25:57.400376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 3 23:25:57.400518 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 3 23:25:57.402227 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.9M memory peak.
Sep 3 23:25:57.859209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157643113.mount: Deactivated successfully.
Sep 3 23:25:58.214208 containerd[1519]: time="2025-09-03T23:25:58.214075527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:58.215362 containerd[1519]: time="2025-09-03T23:25:58.215336753Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Sep 3 23:25:58.216171 containerd[1519]: time="2025-09-03T23:25:58.216128392Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:58.218486 containerd[1519]: time="2025-09-03T23:25:58.218455133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:58.219478 containerd[1519]: time="2025-09-03T23:25:58.219445655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.335137679s"
Sep 3 23:25:58.219518 containerd[1519]: time="2025-09-03T23:25:58.219478976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 3 23:25:58.219963 containerd[1519]: time="2025-09-03T23:25:58.219942542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 3 23:25:58.737397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918164073.mount: Deactivated successfully.
Sep 3 23:25:59.438832 containerd[1519]: time="2025-09-03T23:25:59.438779420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:59.439847 containerd[1519]: time="2025-09-03T23:25:59.439779523Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 3 23:25:59.441003 containerd[1519]: time="2025-09-03T23:25:59.440551621Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:59.446171 containerd[1519]: time="2025-09-03T23:25:59.446124280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:25:59.447928 containerd[1519]: time="2025-09-03T23:25:59.447886811Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.227914003s"
Sep 3 23:25:59.447928 containerd[1519]: time="2025-09-03T23:25:59.447925454Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 3 23:25:59.448385 containerd[1519]: time="2025-09-03T23:25:59.448331788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 3 23:25:59.860401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4627247.mount: Deactivated successfully.
Sep 3 23:25:59.864457 containerd[1519]: time="2025-09-03T23:25:59.864410888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:25:59.864809 containerd[1519]: time="2025-09-03T23:25:59.864779421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 3 23:25:59.865742 containerd[1519]: time="2025-09-03T23:25:59.865719533Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:25:59.867544 containerd[1519]: time="2025-09-03T23:25:59.867518580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 3 23:25:59.868082 containerd[1519]: time="2025-09-03T23:25:59.868056774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 419.695733ms"
Sep 3 23:25:59.868131 containerd[1519]: time="2025-09-03T23:25:59.868089397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 3 23:25:59.868882 containerd[1519]: time="2025-09-03T23:25:59.868713905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 3 23:26:00.353087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519499565.mount: Deactivated successfully.
Sep 3 23:26:02.328799 containerd[1519]: time="2025-09-03T23:26:02.328745121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:02.329891 containerd[1519]: time="2025-09-03T23:26:02.329650694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 3 23:26:02.330729 containerd[1519]: time="2025-09-03T23:26:02.330704825Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:02.469454 containerd[1519]: time="2025-09-03T23:26:02.469359024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:02.470694 containerd[1519]: time="2025-09-03T23:26:02.470564597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.601819843s"
Sep 3 23:26:02.470694 containerd[1519]: time="2025-09-03T23:26:02.470601556Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 3 23:26:07.496317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 3 23:26:07.497782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:26:07.512678 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:26:07.512749 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:26:07.512961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:26:07.516141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:26:07.538133 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)...
Sep 3 23:26:07.538166 systemd[1]: Reloading...
Sep 3 23:26:07.615179 zram_generator::config[2213]: No configuration found.
Sep 3 23:26:07.705772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 3 23:26:07.792298 systemd[1]: Reloading finished in 253 ms.
Sep 3 23:26:07.836587 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 3 23:26:07.836658 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 3 23:26:07.836881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:26:07.836925 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak.
Sep 3 23:26:07.838375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 3 23:26:07.953570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 3 23:26:07.966384 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 3 23:26:07.996769 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:26:07.996769 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 3 23:26:07.996769 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 3 23:26:07.997045 kubelet[2258]: I0903 23:26:07.996816 2258 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 3 23:26:08.510188 kubelet[2258]: I0903 23:26:08.510029 2258 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 3 23:26:08.510188 kubelet[2258]: I0903 23:26:08.510063 2258 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 3 23:26:08.510357 kubelet[2258]: I0903 23:26:08.510340 2258 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 3 23:26:08.534831 kubelet[2258]: I0903 23:26:08.534783 2258 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 3 23:26:08.535189 kubelet[2258]: E0903 23:26:08.535156 2258 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:26:08.539626 kubelet[2258]: I0903 23:26:08.539606 2258 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 3 23:26:08.542233 kubelet[2258]: I0903 23:26:08.542218 2258 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 3 23:26:08.543344 kubelet[2258]: I0903 23:26:08.543307 2258 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:26:08.543502 kubelet[2258]: I0903 23:26:08.543341 2258 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:26:08.543585 kubelet[2258]: I0903 23:26:08.543575 2258 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:26:08.543585 kubelet[2258]: I0903 23:26:08.543584 2258 container_manager_linux.go:304] "Creating device plugin manager"
Sep 3 23:26:08.543772 kubelet[2258]: I0903 23:26:08.543757 2258 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:26:08.546456 kubelet[2258]: I0903 23:26:08.546441 2258 kubelet.go:446] "Attempting to sync node with API server"
Sep 3 23:26:08.546504 kubelet[2258]: I0903 23:26:08.546470 2258 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:26:08.546504 kubelet[2258]: I0903 23:26:08.546494 2258 kubelet.go:352] "Adding apiserver pod source"
Sep 3 23:26:08.546504 kubelet[2258]: I0903 23:26:08.546505 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:26:08.549674 kubelet[2258]: I0903 23:26:08.549643 2258 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:26:08.551672 kubelet[2258]: I0903 23:26:08.551349 2258 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:26:08.551672 kubelet[2258]: W0903 23:26:08.551486 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 3 23:26:08.552349 kubelet[2258]: I0903 23:26:08.552320 2258 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 3 23:26:08.552349 kubelet[2258]: I0903 23:26:08.552354 2258 server.go:1287] "Started kubelet"
Sep 3 23:26:08.552589 kubelet[2258]: W0903 23:26:08.552545 2258 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 3 23:26:08.552679 kubelet[2258]: E0903 23:26:08.552657 2258 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:26:08.555600 kubelet[2258]: I0903 23:26:08.555574 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:26:08.557340 kubelet[2258]: I0903 23:26:08.557255 2258 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:26:08.558357 kubelet[2258]: I0903 23:26:08.558293 2258 server.go:479] "Adding debug handlers to kubelet server"
Sep 3 23:26:08.558928 kubelet[2258]: E0903 23:26:08.558520 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861e96966899bd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:26:08.552336345 +0000 UTC m=+0.583065851,LastTimestamp:2025-09-03 23:26:08.552336345 +0000 UTC m=+0.583065851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 3 23:26:08.559842 kubelet[2258]: I0903 23:26:08.559774 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:26:08.560066 kubelet[2258]: I0903 23:26:08.560038 2258 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:26:08.560296 kubelet[2258]: I0903 23:26:08.560279 2258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:26:08.560690 kubelet[2258]: I0903 23:26:08.560665 2258 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 3 23:26:08.560967 kubelet[2258]: E0903 23:26:08.560945 2258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:26:08.561094 kubelet[2258]: I0903 23:26:08.561084 2258 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 3 23:26:08.561165 kubelet[2258]: I0903 23:26:08.561142 2258 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:26:08.563214 kubelet[2258]: E0903 23:26:08.562607 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms"
Sep 3 23:26:08.563214 kubelet[2258]: W0903 23:26:08.562683 2258 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 3 23:26:08.563214 kubelet[2258]: E0903 23:26:08.562718 2258 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:26:08.563214 kubelet[2258]: W0903 23:26:08.562920 2258 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 3 23:26:08.563214 kubelet[2258]: E0903 23:26:08.562962 2258 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:26:08.563908 kubelet[2258]: E0903 23:26:08.563827 2258 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:26:08.564010 kubelet[2258]: I0903 23:26:08.563991 2258 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:26:08.564010 kubelet[2258]: I0903 23:26:08.564005 2258 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:26:08.564096 kubelet[2258]: I0903 23:26:08.564077 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:26:08.573215 kubelet[2258]: I0903 23:26:08.573168 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:26:08.574845 kubelet[2258]: I0903 23:26:08.574112 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:26:08.574845 kubelet[2258]: I0903 23:26:08.574135 2258 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 3 23:26:08.574845 kubelet[2258]: I0903 23:26:08.574173 2258 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 3 23:26:08.574845 kubelet[2258]: I0903 23:26:08.574181 2258 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 3 23:26:08.574845 kubelet[2258]: E0903 23:26:08.574218 2258 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:26:08.578773 kubelet[2258]: W0903 23:26:08.578726 2258 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Sep 3 23:26:08.578830 kubelet[2258]: E0903 23:26:08.578783 2258 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError"
Sep 3 23:26:08.579446 kubelet[2258]: I0903 23:26:08.579432 2258 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 3 23:26:08.579747 kubelet[2258]: I0903 23:26:08.579538 2258 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 3 23:26:08.579747 kubelet[2258]: I0903 23:26:08.579560 2258 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:26:08.661898 kubelet[2258]: E0903 23:26:08.661848 2258 kubelet_node_status.go:466] "Error getting the current node
from lister" err="node \"localhost\" not found" Sep 3 23:26:08.675072 kubelet[2258]: E0903 23:26:08.675038 2258 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 3 23:26:08.692188 kubelet[2258]: I0903 23:26:08.692169 2258 policy_none.go:49] "None policy: Start" Sep 3 23:26:08.692304 kubelet[2258]: I0903 23:26:08.692267 2258 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 3 23:26:08.692615 kubelet[2258]: I0903 23:26:08.692291 2258 state_mem.go:35] "Initializing new in-memory state store" Sep 3 23:26:08.698095 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 3 23:26:08.709828 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 3 23:26:08.712334 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 3 23:26:08.734900 kubelet[2258]: I0903 23:26:08.734875 2258 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 3 23:26:08.735073 kubelet[2258]: I0903 23:26:08.735044 2258 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 3 23:26:08.735105 kubelet[2258]: I0903 23:26:08.735061 2258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 3 23:26:08.735281 kubelet[2258]: I0903 23:26:08.735266 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 3 23:26:08.736222 kubelet[2258]: E0903 23:26:08.736194 2258 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 3 23:26:08.736276 kubelet[2258]: E0903 23:26:08.736240 2258 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 3 23:26:08.763848 kubelet[2258]: E0903 23:26:08.763772 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Sep 3 23:26:08.836891 kubelet[2258]: I0903 23:26:08.836859 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 3 23:26:08.837268 kubelet[2258]: E0903 23:26:08.837245 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Sep 3 23:26:08.882395 systemd[1]: Created slice kubepods-burstable-pod751a97f3bff819b2598b894175e43bc2.slice - libcontainer container kubepods-burstable-pod751a97f3bff819b2598b894175e43bc2.slice. Sep 3 23:26:08.890739 kubelet[2258]: E0903 23:26:08.890721 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:08.893738 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 3 23:26:08.895309 kubelet[2258]: E0903 23:26:08.895166 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:08.896419 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 3 23:26:08.897688 kubelet[2258]: E0903 23:26:08.897670 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:08.963091 kubelet[2258]: I0903 23:26:08.963059 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:08.963172 kubelet[2258]: I0903 23:26:08.963096 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:08.963172 kubelet[2258]: I0903 23:26:08.963116 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:08.963172 kubelet[2258]: I0903 23:26:08.963132 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:08.963172 kubelet[2258]: I0903 23:26:08.963159 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:08.963264 kubelet[2258]: I0903 23:26:08.963176 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 3 23:26:08.963264 kubelet[2258]: I0903 23:26:08.963193 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:08.963264 kubelet[2258]: I0903 23:26:08.963207 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:08.963264 kubelet[2258]: I0903 23:26:08.963222 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:09.038529 kubelet[2258]: I0903 23:26:09.038447 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 3 23:26:09.039115 kubelet[2258]: E0903 
23:26:09.038740 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Sep 3 23:26:09.164531 kubelet[2258]: E0903 23:26:09.164486 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Sep 3 23:26:09.191759 kubelet[2258]: E0903 23:26:09.191707 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.192345 containerd[1519]: time="2025-09-03T23:26:09.192312094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:751a97f3bff819b2598b894175e43bc2,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:09.195455 kubelet[2258]: E0903 23:26:09.195436 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.195751 containerd[1519]: time="2025-09-03T23:26:09.195728441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:09.198135 kubelet[2258]: E0903 23:26:09.198112 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.198649 containerd[1519]: time="2025-09-03T23:26:09.198622311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:09.218931 containerd[1519]: 
time="2025-09-03T23:26:09.218884879Z" level=info msg="connecting to shim 80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd" address="unix:///run/containerd/s/0eb42254cfcb5697582ee73c7d7198d96f97dff2cf0811fcdc2312a34ded5b1c" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:09.222831 containerd[1519]: time="2025-09-03T23:26:09.222794041Z" level=info msg="connecting to shim b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499" address="unix:///run/containerd/s/8f83cd843a6a7181e2a9d831cb15a3b9979509689fda19cdf7ed8febed260d3b" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:09.227416 containerd[1519]: time="2025-09-03T23:26:09.227388097Z" level=info msg="connecting to shim 6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc" address="unix:///run/containerd/s/baf02acb395eaffdbd4e55965a777a6f2841af2fd77e144dd3464e7b8e4bfd8e" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:09.229468 kubelet[2258]: E0903 23:26:09.229304 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861e96966899bd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-03 23:26:08.552336345 +0000 UTC m=+0.583065851,LastTimestamp:2025-09-03 23:26:08.552336345 +0000 UTC m=+0.583065851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 3 23:26:09.247297 systemd[1]: Started cri-containerd-80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd.scope - libcontainer container 
80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd. Sep 3 23:26:09.251030 systemd[1]: Started cri-containerd-6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc.scope - libcontainer container 6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc. Sep 3 23:26:09.253062 systemd[1]: Started cri-containerd-b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499.scope - libcontainer container b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499. Sep 3 23:26:09.290441 containerd[1519]: time="2025-09-03T23:26:09.290245220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc\"" Sep 3 23:26:09.292095 kubelet[2258]: E0903 23:26:09.292062 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.296459 containerd[1519]: time="2025-09-03T23:26:09.296337974Z" level=info msg="CreateContainer within sandbox \"6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 3 23:26:09.296985 containerd[1519]: time="2025-09-03T23:26:09.296942002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499\"" Sep 3 23:26:09.297559 kubelet[2258]: E0903 23:26:09.297539 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.298961 containerd[1519]: time="2025-09-03T23:26:09.298917179Z" level=info msg="CreateContainer 
within sandbox \"b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 3 23:26:09.300845 containerd[1519]: time="2025-09-03T23:26:09.300817495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:751a97f3bff819b2598b894175e43bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd\"" Sep 3 23:26:09.301640 kubelet[2258]: E0903 23:26:09.301594 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.303407 containerd[1519]: time="2025-09-03T23:26:09.303367555Z" level=info msg="CreateContainer within sandbox \"80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 3 23:26:09.308023 containerd[1519]: time="2025-09-03T23:26:09.307926462Z" level=info msg="Container 316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:09.308676 containerd[1519]: time="2025-09-03T23:26:09.308653913Z" level=info msg="Container 3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:09.310560 containerd[1519]: time="2025-09-03T23:26:09.310535612Z" level=info msg="Container a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:09.315770 containerd[1519]: time="2025-09-03T23:26:09.315738660Z" level=info msg="CreateContainer within sandbox \"6686a88eed6a13f9c3223297aa4c59f4cf98cf204b72f0703145ae299dfab8fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46\"" Sep 3 23:26:09.316990 containerd[1519]: 
time="2025-09-03T23:26:09.316389406Z" level=info msg="StartContainer for \"316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46\"" Sep 3 23:26:09.316990 containerd[1519]: time="2025-09-03T23:26:09.316840785Z" level=info msg="CreateContainer within sandbox \"b51477f09e87219b0c7676a98e6a5dd8ba9f3db6f9852f9c5aa5237954671499\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee\"" Sep 3 23:26:09.317647 containerd[1519]: time="2025-09-03T23:26:09.317495094Z" level=info msg="StartContainer for \"3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee\"" Sep 3 23:26:09.318112 containerd[1519]: time="2025-09-03T23:26:09.317941389Z" level=info msg="CreateContainer within sandbox \"80a569350c515acd308593818ba52efc8797433170a6725935b532e80830b5dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552\"" Sep 3 23:26:09.318180 containerd[1519]: time="2025-09-03T23:26:09.317968892Z" level=info msg="connecting to shim 316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46" address="unix:///run/containerd/s/baf02acb395eaffdbd4e55965a777a6f2841af2fd77e144dd3464e7b8e4bfd8e" protocol=ttrpc version=3 Sep 3 23:26:09.318570 containerd[1519]: time="2025-09-03T23:26:09.318546056Z" level=info msg="StartContainer for \"a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552\"" Sep 3 23:26:09.319460 containerd[1519]: time="2025-09-03T23:26:09.319425034Z" level=info msg="connecting to shim 3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee" address="unix:///run/containerd/s/8f83cd843a6a7181e2a9d831cb15a3b9979509689fda19cdf7ed8febed260d3b" protocol=ttrpc version=3 Sep 3 23:26:09.320268 containerd[1519]: time="2025-09-03T23:26:09.320240038Z" level=info msg="connecting to shim a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552" 
address="unix:///run/containerd/s/0eb42254cfcb5697582ee73c7d7198d96f97dff2cf0811fcdc2312a34ded5b1c" protocol=ttrpc version=3 Sep 3 23:26:09.347283 systemd[1]: Started cri-containerd-316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46.scope - libcontainer container 316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46. Sep 3 23:26:09.348109 systemd[1]: Started cri-containerd-3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee.scope - libcontainer container 3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee. Sep 3 23:26:09.348937 systemd[1]: Started cri-containerd-a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552.scope - libcontainer container a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552. Sep 3 23:26:09.393699 containerd[1519]: time="2025-09-03T23:26:09.393551897Z" level=info msg="StartContainer for \"316c3a7a5da1c295db7e094d066cf1e619eeb0e3195af3796211586898842a46\" returns successfully" Sep 3 23:26:09.394780 containerd[1519]: time="2025-09-03T23:26:09.394734770Z" level=info msg="StartContainer for \"3f26712b057083d8abf9cbb791e4ffd9b2c8ca087af044db6cf90be4c83ed6ee\" returns successfully" Sep 3 23:26:09.395246 containerd[1519]: time="2025-09-03T23:26:09.394893183Z" level=info msg="StartContainer for \"a56e7bc82fca93e98c79babde69747cb340b46339310f0395e178d8173e4d552\" returns successfully" Sep 3 23:26:09.441172 kubelet[2258]: I0903 23:26:09.441097 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 3 23:26:09.441490 kubelet[2258]: E0903 23:26:09.441451 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Sep 3 23:26:09.586833 kubelet[2258]: E0903 23:26:09.586736 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 3 23:26:09.588179 kubelet[2258]: E0903 23:26:09.587360 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.590454 kubelet[2258]: E0903 23:26:09.590435 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:09.590774 kubelet[2258]: E0903 23:26:09.590529 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:09.593666 kubelet[2258]: E0903 23:26:09.593645 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:09.593774 kubelet[2258]: E0903 23:26:09.593754 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:10.244200 kubelet[2258]: I0903 23:26:10.243780 2258 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 3 23:26:10.596347 kubelet[2258]: E0903 23:26:10.596192 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:10.596347 kubelet[2258]: E0903 23:26:10.596315 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:10.597184 kubelet[2258]: E0903 23:26:10.596466 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:10.597184 kubelet[2258]: E0903 
23:26:10.596552 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:10.597184 kubelet[2258]: E0903 23:26:10.596719 2258 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 3 23:26:10.597184 kubelet[2258]: E0903 23:26:10.596812 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:10.732810 kubelet[2258]: E0903 23:26:10.732741 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 3 23:26:10.834181 kubelet[2258]: I0903 23:26:10.833676 2258 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 3 23:26:10.862206 kubelet[2258]: I0903 23:26:10.862112 2258 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:10.867489 kubelet[2258]: E0903 23:26:10.867465 2258 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:10.867604 kubelet[2258]: I0903 23:26:10.867593 2258 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:10.869044 kubelet[2258]: E0903 23:26:10.869023 2258 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 3 23:26:10.869117 kubelet[2258]: I0903 23:26:10.869107 2258 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Sep 3 23:26:10.870507 kubelet[2258]: E0903 23:26:10.870464 2258 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 3 23:26:11.551169 kubelet[2258]: I0903 23:26:11.550979 2258 apiserver.go:52] "Watching apiserver" Sep 3 23:26:11.561209 kubelet[2258]: I0903 23:26:11.561174 2258 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 3 23:26:12.231691 kubelet[2258]: I0903 23:26:12.231657 2258 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 3 23:26:12.236623 kubelet[2258]: E0903 23:26:12.236602 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:12.599285 kubelet[2258]: E0903 23:26:12.599223 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:12.891912 systemd[1]: Reload requested from client PID 2533 ('systemctl') (unit session-7.scope)... Sep 3 23:26:12.891928 systemd[1]: Reloading... Sep 3 23:26:12.964228 zram_generator::config[2578]: No configuration found. Sep 3 23:26:13.035953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 3 23:26:13.134244 systemd[1]: Reloading finished in 242 ms. Sep 3 23:26:13.163892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:26:13.177091 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 3 23:26:13.177480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:26:13.177542 systemd[1]: kubelet.service: Consumed 929ms CPU time, 128.8M memory peak. Sep 3 23:26:13.179273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 3 23:26:13.323667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 3 23:26:13.327302 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 3 23:26:13.365089 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 3 23:26:13.365089 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 3 23:26:13.365089 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 3 23:26:13.365462 kubelet[2618]: I0903 23:26:13.365168 2618 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 3 23:26:13.372186 kubelet[2618]: I0903 23:26:13.372134 2618 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 3 23:26:13.372186 kubelet[2618]: I0903 23:26:13.372181 2618 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 3 23:26:13.372429 kubelet[2618]: I0903 23:26:13.372401 2618 server.go:954] "Client rotation is on, will bootstrap in background" Sep 3 23:26:13.373568 kubelet[2618]: I0903 23:26:13.373544 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 3 23:26:13.376603 kubelet[2618]: I0903 23:26:13.376354 2618 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 3 23:26:13.379788 kubelet[2618]: I0903 23:26:13.379758 2618 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 3 23:26:13.385191 kubelet[2618]: I0903 23:26:13.383992 2618 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 3 23:26:13.385191 kubelet[2618]: I0903 23:26:13.384193 2618 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 3 23:26:13.385191 kubelet[2618]: I0903 23:26:13.384215 2618 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 3 23:26:13.385191 kubelet[2618]: I0903 23:26:13.384366 2618 topology_manager.go:138] "Creating topology manager with none policy"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384374 2618 container_manager_linux.go:304] "Creating device plugin manager"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384411 2618 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384529 2618 kubelet.go:446] "Attempting to sync node with API server"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384539 2618 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384560 2618 kubelet.go:352] "Adding apiserver pod source"
Sep 3 23:26:13.385400 kubelet[2618]: I0903 23:26:13.384571 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 3 23:26:13.385877 kubelet[2618]: I0903 23:26:13.385851 2618 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 3 23:26:13.386481 kubelet[2618]: I0903 23:26:13.386467 2618 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 3 23:26:13.387051 kubelet[2618]: I0903 23:26:13.387038 2618 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 3 23:26:13.387175 kubelet[2618]: I0903 23:26:13.387165 2618 server.go:1287] "Started kubelet"
Sep 3 23:26:13.388922 kubelet[2618]: I0903 23:26:13.388895 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 3 23:26:13.390533 kubelet[2618]: I0903 23:26:13.390493 2618 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 3 23:26:13.390790 kubelet[2618]: I0903 23:26:13.390660 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 3 23:26:13.391299 kubelet[2618]: I0903 23:26:13.391274 2618 server.go:479] "Adding debug handlers to kubelet server"
Sep 3 23:26:13.391299 kubelet[2618]: I0903 23:26:13.391294 2618 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 3 23:26:13.392002 kubelet[2618]: E0903 23:26:13.391970 2618 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 3 23:26:13.392102 kubelet[2618]: I0903 23:26:13.391282 2618 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 3 23:26:13.392410 kubelet[2618]: E0903 23:26:13.392384 2618 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 3 23:26:13.392919 kubelet[2618]: I0903 23:26:13.392877 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 3 23:26:13.393216 kubelet[2618]: I0903 23:26:13.393042 2618 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 3 23:26:13.393290 kubelet[2618]: I0903 23:26:13.393248 2618 reconciler.go:26] "Reconciler: start to sync state"
Sep 3 23:26:13.394615 kubelet[2618]: I0903 23:26:13.394587 2618 factory.go:221] Registration of the containerd container factory successfully
Sep 3 23:26:13.394615 kubelet[2618]: I0903 23:26:13.394609 2618 factory.go:221] Registration of the systemd container factory successfully
Sep 3 23:26:13.394714 kubelet[2618]: I0903 23:26:13.394687 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 3 23:26:13.415037 kubelet[2618]: I0903 23:26:13.414954 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 3 23:26:13.419069 kubelet[2618]: I0903 23:26:13.419050 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 3 23:26:13.419386 kubelet[2618]: I0903 23:26:13.419372 2618 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 3 23:26:13.419482 kubelet[2618]: I0903 23:26:13.419470 2618 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 3 23:26:13.420902 kubelet[2618]: I0903 23:26:13.420886 2618 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 3 23:26:13.421027 kubelet[2618]: E0903 23:26:13.421009 2618 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 3 23:26:13.441139 kubelet[2618]: I0903 23:26:13.441120 2618 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 3 23:26:13.441139 kubelet[2618]: I0903 23:26:13.441136 2618 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 3 23:26:13.441139 kubelet[2618]: I0903 23:26:13.441167 2618 state_mem.go:36] "Initialized new in-memory state store"
Sep 3 23:26:13.441326 kubelet[2618]: I0903 23:26:13.441306 2618 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 3 23:26:13.441350 kubelet[2618]: I0903 23:26:13.441323 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 3 23:26:13.441350 kubelet[2618]: I0903 23:26:13.441340 2618 policy_none.go:49] "None policy: Start"
Sep 3 23:26:13.441350 kubelet[2618]: I0903 23:26:13.441348 2618 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 3 23:26:13.441407 kubelet[2618]: I0903 23:26:13.441357 2618 state_mem.go:35] "Initializing new in-memory state store"
Sep 3 23:26:13.441464 kubelet[2618]: I0903 23:26:13.441453 2618 state_mem.go:75] "Updated machine memory state"
Sep 3 23:26:13.445471 kubelet[2618]: I0903 23:26:13.445449 2618 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 3 23:26:13.446296 kubelet[2618]: I0903 23:26:13.446093 2618 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 3 23:26:13.446296 kubelet[2618]: I0903 23:26:13.446111 2618 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 3 23:26:13.446507 kubelet[2618]: I0903 23:26:13.446487 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 3 23:26:13.447837 kubelet[2618]: E0903 23:26:13.447749 2618 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 3 23:26:13.522436 kubelet[2618]: I0903 23:26:13.522396 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.522436 kubelet[2618]: I0903 23:26:13.522427 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:13.522675 kubelet[2618]: I0903 23:26:13.522656 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 3 23:26:13.530880 kubelet[2618]: E0903 23:26:13.530850 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:13.549703 kubelet[2618]: I0903 23:26:13.549621 2618 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 3 23:26:13.555180 kubelet[2618]: I0903 23:26:13.555042 2618 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 3 23:26:13.555180 kubelet[2618]: I0903 23:26:13.555106 2618 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 3 23:26:13.593855 kubelet[2618]: I0903 23:26:13.593820 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 3 23:26:13.593855 kubelet[2618]: I0903 23:26:13.593854 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:13.593966 kubelet[2618]: I0903 23:26:13.593873 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:13.593966 kubelet[2618]: I0903 23:26:13.593892 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.593966 kubelet[2618]: I0903 23:26:13.593907 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.593966 kubelet[2618]: I0903 23:26:13.593924 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.593966 kubelet[2618]: I0903 23:26:13.593938 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/751a97f3bff819b2598b894175e43bc2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"751a97f3bff819b2598b894175e43bc2\") " pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:13.594075 kubelet[2618]: I0903 23:26:13.593951 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.594075 kubelet[2618]: I0903 23:26:13.593989 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 3 23:26:13.829566 kubelet[2618]: E0903 23:26:13.829487 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:13.831671 kubelet[2618]: E0903 23:26:13.831610 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:13.831743 kubelet[2618]: E0903 23:26:13.831710 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:14.384940 kubelet[2618]: I0903 23:26:14.384899 2618 apiserver.go:52] "Watching apiserver"
Sep 3 23:26:14.392270 kubelet[2618]: I0903 23:26:14.392232 2618 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 3 23:26:14.431037 kubelet[2618]: I0903 23:26:14.430890 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:14.431037 kubelet[2618]: I0903 23:26:14.430942 2618 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 3 23:26:14.432524 kubelet[2618]: E0903 23:26:14.432022 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:14.464977 kubelet[2618]: E0903 23:26:14.464938 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 3 23:26:14.465871 kubelet[2618]: E0903 23:26:14.465100 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:14.465871 kubelet[2618]: E0903 23:26:14.465757 2618 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 3 23:26:14.466154 kubelet[2618]: E0903 23:26:14.466117 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:14.485242 kubelet[2618]: I0903 23:26:14.485183 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.484816734 podStartE2EDuration="1.484816734s" podCreationTimestamp="2025-09-03 23:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:14.483967488 +0000 UTC m=+1.153890235" watchObservedRunningTime="2025-09-03 23:26:14.484816734 +0000 UTC m=+1.154739481"
Sep 3 23:26:14.499535 kubelet[2618]: I0903 23:26:14.499469 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.499451957 podStartE2EDuration="2.499451957s" podCreationTimestamp="2025-09-03 23:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:14.493118029 +0000 UTC m=+1.163040816" watchObservedRunningTime="2025-09-03 23:26:14.499451957 +0000 UTC m=+1.169374744"
Sep 3 23:26:14.508931 kubelet[2618]: I0903 23:26:14.508878 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.508862251 podStartE2EDuration="1.508862251s" podCreationTimestamp="2025-09-03 23:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:14.500030126 +0000 UTC m=+1.169952954" watchObservedRunningTime="2025-09-03 23:26:14.508862251 +0000 UTC m=+1.178784998"
Sep 3 23:26:15.431899 kubelet[2618]: E0903 23:26:15.431869 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:15.432388 kubelet[2618]: E0903 23:26:15.431982 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:18.685499 kubelet[2618]: E0903 23:26:18.685458 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:18.788988 kubelet[2618]: I0903 23:26:18.788957 2618 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 3 23:26:18.789301 containerd[1519]: time="2025-09-03T23:26:18.789265981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 3 23:26:18.789688 kubelet[2618]: I0903 23:26:18.789493 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 3 23:26:19.749056 systemd[1]: Created slice kubepods-besteffort-podcaac4902_32d9_4a6e_bcc8_07c43be8efd2.slice - libcontainer container kubepods-besteffort-podcaac4902_32d9_4a6e_bcc8_07c43be8efd2.slice.
Sep 3 23:26:19.835008 kubelet[2618]: I0903 23:26:19.834940 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/caac4902-32d9-4a6e-bcc8-07c43be8efd2-kube-proxy\") pod \"kube-proxy-2xtlm\" (UID: \"caac4902-32d9-4a6e-bcc8-07c43be8efd2\") " pod="kube-system/kube-proxy-2xtlm"
Sep 3 23:26:19.835008 kubelet[2618]: I0903 23:26:19.835001 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caac4902-32d9-4a6e-bcc8-07c43be8efd2-xtables-lock\") pod \"kube-proxy-2xtlm\" (UID: \"caac4902-32d9-4a6e-bcc8-07c43be8efd2\") " pod="kube-system/kube-proxy-2xtlm"
Sep 3 23:26:19.835406 kubelet[2618]: I0903 23:26:19.835022 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caac4902-32d9-4a6e-bcc8-07c43be8efd2-lib-modules\") pod \"kube-proxy-2xtlm\" (UID: \"caac4902-32d9-4a6e-bcc8-07c43be8efd2\") " pod="kube-system/kube-proxy-2xtlm"
Sep 3 23:26:19.835406 kubelet[2618]: I0903 23:26:19.835038 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2th22\" (UniqueName: \"kubernetes.io/projected/caac4902-32d9-4a6e-bcc8-07c43be8efd2-kube-api-access-2th22\") pod \"kube-proxy-2xtlm\" (UID: \"caac4902-32d9-4a6e-bcc8-07c43be8efd2\") " pod="kube-system/kube-proxy-2xtlm"
Sep 3 23:26:19.922508 systemd[1]: Created slice kubepods-besteffort-pod0d03120c_db7f_4f1d_81ac_566ac41ba11b.slice - libcontainer container kubepods-besteffort-pod0d03120c_db7f_4f1d_81ac_566ac41ba11b.slice.
Sep 3 23:26:19.935440 kubelet[2618]: I0903 23:26:19.935389 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwl2\" (UniqueName: \"kubernetes.io/projected/0d03120c-db7f-4f1d-81ac-566ac41ba11b-kube-api-access-wnwl2\") pod \"tigera-operator-755d956888-4wrjf\" (UID: \"0d03120c-db7f-4f1d-81ac-566ac41ba11b\") " pod="tigera-operator/tigera-operator-755d956888-4wrjf"
Sep 3 23:26:19.935595 kubelet[2618]: I0903 23:26:19.935476 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d03120c-db7f-4f1d-81ac-566ac41ba11b-var-lib-calico\") pod \"tigera-operator-755d956888-4wrjf\" (UID: \"0d03120c-db7f-4f1d-81ac-566ac41ba11b\") " pod="tigera-operator/tigera-operator-755d956888-4wrjf"
Sep 3 23:26:20.060036 kubelet[2618]: E0903 23:26:20.059992 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:20.060731 containerd[1519]: time="2025-09-03T23:26:20.060577147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xtlm,Uid:caac4902-32d9-4a6e-bcc8-07c43be8efd2,Namespace:kube-system,Attempt:0,}"
Sep 3 23:26:20.074739 containerd[1519]: time="2025-09-03T23:26:20.074589266Z" level=info msg="connecting to shim 9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b" address="unix:///run/containerd/s/80f22212fd513dbca860bf8fb64bc77b23582590d6018720ce79447a913c3327" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:26:20.099306 systemd[1]: Started cri-containerd-9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b.scope - libcontainer container 9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b.
Sep 3 23:26:20.119209 containerd[1519]: time="2025-09-03T23:26:20.119126997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xtlm,Uid:caac4902-32d9-4a6e-bcc8-07c43be8efd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b\""
Sep 3 23:26:20.120172 kubelet[2618]: E0903 23:26:20.119926 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:20.122722 containerd[1519]: time="2025-09-03T23:26:20.122676789Z" level=info msg="CreateContainer within sandbox \"9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 3 23:26:20.133429 containerd[1519]: time="2025-09-03T23:26:20.133385897Z" level=info msg="Container 24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:20.139502 containerd[1519]: time="2025-09-03T23:26:20.139463469Z" level=info msg="CreateContainer within sandbox \"9b8196e318f80c7eb125d9febbd500d487f1e2be8e467f2deae4410cf04c504b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5\""
Sep 3 23:26:20.139985 containerd[1519]: time="2025-09-03T23:26:20.139959390Z" level=info msg="StartContainer for \"24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5\""
Sep 3 23:26:20.141606 containerd[1519]: time="2025-09-03T23:26:20.141511891Z" level=info msg="connecting to shim 24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5" address="unix:///run/containerd/s/80f22212fd513dbca860bf8fb64bc77b23582590d6018720ce79447a913c3327" protocol=ttrpc version=3
Sep 3 23:26:20.163337 systemd[1]: Started cri-containerd-24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5.scope - libcontainer container 24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5.
Sep 3 23:26:20.198757 containerd[1519]: time="2025-09-03T23:26:20.198713411Z" level=info msg="StartContainer for \"24a7fa1e6394f042635672552a49a5dd6b4642315316a698fa126c370d647ac5\" returns successfully"
Sep 3 23:26:20.225173 containerd[1519]: time="2025-09-03T23:26:20.225110090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4wrjf,Uid:0d03120c-db7f-4f1d-81ac-566ac41ba11b,Namespace:tigera-operator,Attempt:0,}"
Sep 3 23:26:20.246821 containerd[1519]: time="2025-09-03T23:26:20.246734437Z" level=info msg="connecting to shim c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4" address="unix:///run/containerd/s/ba635fdf3662d7f4807931025714e94ea11ff0ba7674035d0f4444a8716d4087" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:26:20.268324 systemd[1]: Started cri-containerd-c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4.scope - libcontainer container c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4.
Sep 3 23:26:20.299387 containerd[1519]: time="2025-09-03T23:26:20.299349111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4wrjf,Uid:0d03120c-db7f-4f1d-81ac-566ac41ba11b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4\""
Sep 3 23:26:20.301023 containerd[1519]: time="2025-09-03T23:26:20.300934100Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 3 23:26:20.443108 kubelet[2618]: E0903 23:26:20.443018 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:20.453178 kubelet[2618]: I0903 23:26:20.452669 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xtlm" podStartSLOduration=1.452652417 podStartE2EDuration="1.452652417s" podCreationTimestamp="2025-09-03 23:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:20.452628491 +0000 UTC m=+7.122551278" watchObservedRunningTime="2025-09-03 23:26:20.452652417 +0000 UTC m=+7.122575164"
Sep 3 23:26:21.546400 kubelet[2618]: E0903 23:26:21.545909 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:21.829985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159153548.mount: Deactivated successfully.
Sep 3 23:26:21.881489 kubelet[2618]: E0903 23:26:21.881439 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:22.450466 kubelet[2618]: E0903 23:26:22.447860 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:22.450466 kubelet[2618]: E0903 23:26:22.448512 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:24.016057 containerd[1519]: time="2025-09-03T23:26:24.016002403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:24.017038 containerd[1519]: time="2025-09-03T23:26:24.017012482Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 3 23:26:24.017935 containerd[1519]: time="2025-09-03T23:26:24.017892375Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:24.019741 containerd[1519]: time="2025-09-03T23:26:24.019706813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:24.020397 containerd[1519]: time="2025-09-03T23:26:24.020370343Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.719406117s"
Sep 3 23:26:24.020425 containerd[1519]: time="2025-09-03T23:26:24.020403630Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 3 23:26:24.023932 containerd[1519]: time="2025-09-03T23:26:24.023896238Z" level=info msg="CreateContainer within sandbox \"c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 3 23:26:24.032668 containerd[1519]: time="2025-09-03T23:26:24.032630918Z" level=info msg="Container bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:24.035840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760680246.mount: Deactivated successfully.
Sep 3 23:26:24.037963 containerd[1519]: time="2025-09-03T23:26:24.037915559Z" level=info msg="CreateContainer within sandbox \"c8fc01e85bae51e55e7771004284b7ad6c6a6c38b0c91e90e6335f86a18bc3b4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63\""
Sep 3 23:26:24.038374 containerd[1519]: time="2025-09-03T23:26:24.038342923Z" level=info msg="StartContainer for \"bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63\""
Sep 3 23:26:24.039093 containerd[1519]: time="2025-09-03T23:26:24.039049022Z" level=info msg="connecting to shim bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63" address="unix:///run/containerd/s/ba635fdf3662d7f4807931025714e94ea11ff0ba7674035d0f4444a8716d4087" protocol=ttrpc version=3
Sep 3 23:26:24.058305 systemd[1]: Started cri-containerd-bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63.scope - libcontainer container bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63.
Sep 3 23:26:24.081481 containerd[1519]: time="2025-09-03T23:26:24.081449813Z" level=info msg="StartContainer for \"bd18968f4a27f217084269217566260604b335d5de8541f992c9226901c8ce63\" returns successfully"
Sep 3 23:26:24.460752 kubelet[2618]: I0903 23:26:24.460499 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-4wrjf" podStartSLOduration=1.738379159 podStartE2EDuration="5.460483908s" podCreationTimestamp="2025-09-03 23:26:19 +0000 UTC" firstStartedPulling="2025-09-03 23:26:20.300521078 +0000 UTC m=+6.970443825" lastFinishedPulling="2025-09-03 23:26:24.022625787 +0000 UTC m=+10.692548574" observedRunningTime="2025-09-03 23:26:24.460031099 +0000 UTC m=+11.129953886" watchObservedRunningTime="2025-09-03 23:26:24.460483908 +0000 UTC m=+11.130406695"
Sep 3 23:26:28.695516 kubelet[2618]: E0903 23:26:28.695437 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:29.168115 sudo[1712]: pam_unix(sudo:session): session closed for user root
Sep 3 23:26:29.171235 sshd[1711]: Connection closed by 10.0.0.1 port 59986
Sep 3 23:26:29.172002 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
Sep 3 23:26:29.176460 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:59986.service: Deactivated successfully.
Sep 3 23:26:29.179946 systemd[1]: session-7.scope: Deactivated successfully.
Sep 3 23:26:29.180329 systemd[1]: session-7.scope: Consumed 6.801s CPU time, 231.9M memory peak.
Sep 3 23:26:29.182128 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit.
Sep 3 23:26:29.184486 systemd-logind[1491]: Removed session 7.
Sep 3 23:26:29.558268 update_engine[1499]: I20250903 23:26:29.558197 1499 update_attempter.cc:509] Updating boot flags...
Sep 3 23:26:34.463709 systemd[1]: Created slice kubepods-besteffort-pod05279b24_ca67_41af_81c0_28b8e21fdc31.slice - libcontainer container kubepods-besteffort-pod05279b24_ca67_41af_81c0_28b8e21fdc31.slice.
Sep 3 23:26:34.534897 kubelet[2618]: I0903 23:26:34.534858 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05279b24-ca67-41af-81c0-28b8e21fdc31-tigera-ca-bundle\") pod \"calico-typha-7fb87bfc74-fcwqf\" (UID: \"05279b24-ca67-41af-81c0-28b8e21fdc31\") " pod="calico-system/calico-typha-7fb87bfc74-fcwqf"
Sep 3 23:26:34.534897 kubelet[2618]: I0903 23:26:34.534901 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05279b24-ca67-41af-81c0-28b8e21fdc31-typha-certs\") pod \"calico-typha-7fb87bfc74-fcwqf\" (UID: \"05279b24-ca67-41af-81c0-28b8e21fdc31\") " pod="calico-system/calico-typha-7fb87bfc74-fcwqf"
Sep 3 23:26:34.535561 kubelet[2618]: I0903 23:26:34.534921 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshdb\" (UniqueName: \"kubernetes.io/projected/05279b24-ca67-41af-81c0-28b8e21fdc31-kube-api-access-rshdb\") pod \"calico-typha-7fb87bfc74-fcwqf\" (UID: \"05279b24-ca67-41af-81c0-28b8e21fdc31\") " pod="calico-system/calico-typha-7fb87bfc74-fcwqf"
Sep 3 23:26:34.632252 systemd[1]: Created slice kubepods-besteffort-podfa4c4302_1211_47d9_ac1c_2c37defda7c7.slice - libcontainer container kubepods-besteffort-podfa4c4302_1211_47d9_ac1c_2c37defda7c7.slice.
Sep 3 23:26:34.735959 kubelet[2618]: I0903 23:26:34.735850 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgrd\" (UniqueName: \"kubernetes.io/projected/fa4c4302-1211-47d9-ac1c-2c37defda7c7-kube-api-access-rkgrd\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.735959 kubelet[2618]: I0903 23:26:34.735893 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-cni-bin-dir\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.735959 kubelet[2618]: I0903 23:26:34.735911 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-cni-log-dir\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.735959 kubelet[2618]: I0903 23:26:34.735926 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-policysync\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.735959 kubelet[2618]: I0903 23:26:34.735944 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-lib-modules\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736155 kubelet[2618]: I0903 23:26:34.735965 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa4c4302-1211-47d9-ac1c-2c37defda7c7-tigera-ca-bundle\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736155 kubelet[2618]: I0903 23:26:34.735991 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-flexvol-driver-host\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736155 kubelet[2618]: I0903 23:26:34.736006 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa4c4302-1211-47d9-ac1c-2c37defda7c7-node-certs\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736155 kubelet[2618]: I0903 23:26:34.736019 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-xtables-lock\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736155 kubelet[2618]: I0903 23:26:34.736035 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-cni-net-dir\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd"
Sep 3 23:26:34.736259 kubelet[2618]: I0903 23:26:34.736051 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-var-lib-calico\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd" Sep 3 23:26:34.736259 kubelet[2618]: I0903 23:26:34.736066 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa4c4302-1211-47d9-ac1c-2c37defda7c7-var-run-calico\") pod \"calico-node-jb2bd\" (UID: \"fa4c4302-1211-47d9-ac1c-2c37defda7c7\") " pod="calico-system/calico-node-jb2bd" Sep 3 23:26:34.769126 kubelet[2618]: E0903 23:26:34.769091 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:34.770847 containerd[1519]: time="2025-09-03T23:26:34.770793713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb87bfc74-fcwqf,Uid:05279b24-ca67-41af-81c0-28b8e21fdc31,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:34.831901 containerd[1519]: time="2025-09-03T23:26:34.831849719Z" level=info msg="connecting to shim 9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b" address="unix:///run/containerd/s/f7f7adcd3bbcf7bf2d912e45d9dcfa0fb25a1b793a01e5f398b4297e641753a0" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:34.838225 kubelet[2618]: E0903 23:26:34.838183 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.838225 kubelet[2618]: W0903 23:26:34.838218 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.839845 kubelet[2618]: E0903 23:26:34.839803 2618 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.845854 kubelet[2618]: E0903 23:26:34.845196 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.845854 kubelet[2618]: W0903 23:26:34.845215 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.845854 kubelet[2618]: E0903 23:26:34.845231 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.858161 kubelet[2618]: E0903 23:26:34.858057 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.858161 kubelet[2618]: W0903 23:26:34.858076 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.858161 kubelet[2618]: E0903 23:26:34.858090 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.869437 kubelet[2618]: E0903 23:26:34.869212 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42" Sep 3 23:26:34.897344 systemd[1]: Started cri-containerd-9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b.scope - libcontainer container 9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b. Sep 3 23:26:34.914595 kubelet[2618]: E0903 23:26:34.914544 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.914595 kubelet[2618]: W0903 23:26:34.914574 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.915340 kubelet[2618]: E0903 23:26:34.915310 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.917279 kubelet[2618]: E0903 23:26:34.917249 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.924177 kubelet[2618]: W0903 23:26:34.917271 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.924260 kubelet[2618]: E0903 23:26:34.924188 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.925272 kubelet[2618]: E0903 23:26:34.925249 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.925272 kubelet[2618]: W0903 23:26:34.925268 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.925366 kubelet[2618]: E0903 23:26:34.925286 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.925668 kubelet[2618]: E0903 23:26:34.925637 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.926194 kubelet[2618]: W0903 23:26:34.925663 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.926194 kubelet[2618]: E0903 23:26:34.926194 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.926838 kubelet[2618]: E0903 23:26:34.926806 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.926838 kubelet[2618]: W0903 23:26:34.926825 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.926838 kubelet[2618]: E0903 23:26:34.926840 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.927217 kubelet[2618]: E0903 23:26:34.927199 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.927217 kubelet[2618]: W0903 23:26:34.927213 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.927287 kubelet[2618]: E0903 23:26:34.927225 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.927373 kubelet[2618]: E0903 23:26:34.927356 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.928210 kubelet[2618]: W0903 23:26:34.927366 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.928210 kubelet[2618]: E0903 23:26:34.928209 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.928757 kubelet[2618]: E0903 23:26:34.928725 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.928757 kubelet[2618]: W0903 23:26:34.928742 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.928757 kubelet[2618]: E0903 23:26:34.928753 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.928932 kubelet[2618]: E0903 23:26:34.928912 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.928932 kubelet[2618]: W0903 23:26:34.928925 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.928994 kubelet[2618]: E0903 23:26:34.928935 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.929405 kubelet[2618]: E0903 23:26:34.929377 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.929405 kubelet[2618]: W0903 23:26:34.929396 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.929479 kubelet[2618]: E0903 23:26:34.929409 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.930365 kubelet[2618]: E0903 23:26:34.930337 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.930365 kubelet[2618]: W0903 23:26:34.930355 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.930365 kubelet[2618]: E0903 23:26:34.930367 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.930538 kubelet[2618]: E0903 23:26:34.930514 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.930538 kubelet[2618]: W0903 23:26:34.930531 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.930538 kubelet[2618]: E0903 23:26:34.930539 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.930726 kubelet[2618]: E0903 23:26:34.930706 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.930726 kubelet[2618]: W0903 23:26:34.930718 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.930726 kubelet[2618]: E0903 23:26:34.930727 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.930850 kubelet[2618]: E0903 23:26:34.930834 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.930850 kubelet[2618]: W0903 23:26:34.930843 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.930850 kubelet[2618]: E0903 23:26:34.930850 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.933411 kubelet[2618]: E0903 23:26:34.933283 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.933411 kubelet[2618]: W0903 23:26:34.933404 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.933543 kubelet[2618]: E0903 23:26:34.933418 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.933724 kubelet[2618]: E0903 23:26:34.933705 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.933724 kubelet[2618]: W0903 23:26:34.933718 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.933917 kubelet[2618]: E0903 23:26:34.933895 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.934205 kubelet[2618]: E0903 23:26:34.934189 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.934205 kubelet[2618]: W0903 23:26:34.934203 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.934278 kubelet[2618]: E0903 23:26:34.934214 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.935262 kubelet[2618]: E0903 23:26:34.935236 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.935262 kubelet[2618]: W0903 23:26:34.935253 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.935262 kubelet[2618]: E0903 23:26:34.935267 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.935431 kubelet[2618]: E0903 23:26:34.935414 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.935431 kubelet[2618]: W0903 23:26:34.935425 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.935431 kubelet[2618]: E0903 23:26:34.935432 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.936135 kubelet[2618]: E0903 23:26:34.936103 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.936135 kubelet[2618]: W0903 23:26:34.936127 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.936135 kubelet[2618]: E0903 23:26:34.936139 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.938619 kubelet[2618]: E0903 23:26:34.938584 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.938619 kubelet[2618]: W0903 23:26:34.938606 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.938619 kubelet[2618]: E0903 23:26:34.938621 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.938754 kubelet[2618]: I0903 23:26:34.938649 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lltps\" (UniqueName: \"kubernetes.io/projected/18b1e484-37ee-425c-815e-d87a59135b42-kube-api-access-lltps\") pod \"csi-node-driver-f6s7q\" (UID: \"18b1e484-37ee-425c-815e-d87a59135b42\") " pod="calico-system/csi-node-driver-f6s7q" Sep 3 23:26:34.938870 kubelet[2618]: E0903 23:26:34.938851 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.939087 kubelet[2618]: W0903 23:26:34.938997 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.939087 kubelet[2618]: E0903 23:26:34.939023 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.939087 kubelet[2618]: I0903 23:26:34.939049 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18b1e484-37ee-425c-815e-d87a59135b42-registration-dir\") pod \"csi-node-driver-f6s7q\" (UID: \"18b1e484-37ee-425c-815e-d87a59135b42\") " pod="calico-system/csi-node-driver-f6s7q" Sep 3 23:26:34.960982 kubelet[2618]: E0903 23:26:34.960833 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.960982 kubelet[2618]: W0903 23:26:34.960866 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.960982 kubelet[2618]: E0903 23:26:34.960896 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.962688 kubelet[2618]: E0903 23:26:34.962665 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.962852 kubelet[2618]: W0903 23:26:34.962775 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.962911 kubelet[2618]: E0903 23:26:34.962848 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.963127 kubelet[2618]: E0903 23:26:34.963114 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.963219 kubelet[2618]: W0903 23:26:34.963207 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.963463 kubelet[2618]: E0903 23:26:34.963450 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.963539 kubelet[2618]: W0903 23:26:34.963528 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.963618 kubelet[2618]: E0903 23:26:34.963588 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.963692 containerd[1519]: time="2025-09-03T23:26:34.963641400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb2bd,Uid:fa4c4302-1211-47d9-ac1c-2c37defda7c7,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:34.963900 kubelet[2618]: E0903 23:26:34.963887 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.963966 kubelet[2618]: W0903 23:26:34.963955 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.964024 kubelet[2618]: E0903 23:26:34.964013 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.964078 kubelet[2618]: E0903 23:26:34.963924 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 3 23:26:34.964185 kubelet[2618]: I0903 23:26:34.964142 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18b1e484-37ee-425c-815e-d87a59135b42-kubelet-dir\") pod \"csi-node-driver-f6s7q\" (UID: \"18b1e484-37ee-425c-815e-d87a59135b42\") " pod="calico-system/csi-node-driver-f6s7q" Sep 3 23:26:34.964404 kubelet[2618]: E0903 23:26:34.964387 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.964404 kubelet[2618]: W0903 23:26:34.964402 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.964471 kubelet[2618]: E0903 23:26:34.964419 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 3 23:26:34.964645 kubelet[2618]: E0903 23:26:34.964631 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 3 23:26:34.964645 kubelet[2618]: W0903 23:26:34.964644 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 3 23:26:34.964720 kubelet[2618]: E0903 23:26:34.964667 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 3 23:25:34.964878 kubelet[2618]: E0903 23:26:34.964862 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.964920 kubelet[2618]: W0903 23:26:34.964877 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.964920 kubelet[2618]: E0903 23:26:34.964888 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.964920 kubelet[2618]: I0903 23:26:34.964911 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/18b1e484-37ee-425c-815e-d87a59135b42-varrun\") pod \"csi-node-driver-f6s7q\" (UID: \"18b1e484-37ee-425c-815e-d87a59135b42\") " pod="calico-system/csi-node-driver-f6s7q"
Sep 3 23:26:34.965282 kubelet[2618]: E0903 23:26:34.965136 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.965282 kubelet[2618]: W0903 23:26:34.965156 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.965282 kubelet[2618]: E0903 23:26:34.965171 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.965282 kubelet[2618]: I0903 23:26:34.965187 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/18b1e484-37ee-425c-815e-d87a59135b42-socket-dir\") pod \"csi-node-driver-f6s7q\" (UID: \"18b1e484-37ee-425c-815e-d87a59135b42\") " pod="calico-system/csi-node-driver-f6s7q"
Sep 3 23:26:34.965405 kubelet[2618]: E0903 23:26:34.965393 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.965431 kubelet[2618]: W0903 23:26:34.965407 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.965431 kubelet[2618]: E0903 23:26:34.965417 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.965576 kubelet[2618]: E0903 23:26:34.965560 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.965576 kubelet[2618]: W0903 23:26:34.965573 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.966028 kubelet[2618]: E0903 23:26:34.965588 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.966028 kubelet[2618]: E0903 23:26:34.965775 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.966028 kubelet[2618]: W0903 23:26:34.965784 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.966028 kubelet[2618]: E0903 23:26:34.965794 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.966028 kubelet[2618]: E0903 23:26:34.965979 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:34.966028 kubelet[2618]: W0903 23:26:34.965986 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:34.966028 kubelet[2618]: E0903 23:26:34.965994 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:34.980331 containerd[1519]: time="2025-09-03T23:26:34.980279295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb87bfc74-fcwqf,Uid:05279b24-ca67-41af-81c0-28b8e21fdc31,Namespace:calico-system,Attempt:0,} returns sandbox id \"9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b\""
Sep 3 23:26:34.981067 containerd[1519]: time="2025-09-03T23:26:34.981040505Z" level=info msg="connecting to shim 93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2" address="unix:///run/containerd/s/ba6f417e62a4113951e50b736d0b72688286b457dcf2b8737c8bded6992360f0" namespace=k8s.io protocol=ttrpc version=3
Sep 3 23:26:34.981889 kubelet[2618]: E0903 23:26:34.981865 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:34.986888 containerd[1519]: time="2025-09-03T23:26:34.986800629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 3 23:26:35.015319 systemd[1]: Started cri-containerd-93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2.scope - libcontainer container 93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2.
Sep 3 23:26:35.036378 containerd[1519]: time="2025-09-03T23:26:35.036314750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jb2bd,Uid:fa4c4302-1211-47d9-ac1c-2c37defda7c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\""
Sep 3 23:26:35.067078 kubelet[2618]: E0903 23:26:35.067047 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.067078 kubelet[2618]: W0903 23:26:35.067070 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.067078 kubelet[2618]: E0903 23:26:35.067090 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.067762 kubelet[2618]: E0903 23:26:35.067714 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.067762 kubelet[2618]: W0903 23:26:35.067761 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.067897 kubelet[2618]: E0903 23:26:35.067782 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.068007 kubelet[2618]: E0903 23:26:35.067975 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.068007 kubelet[2618]: W0903 23:26:35.067987 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.068007 kubelet[2618]: E0903 23:26:35.067998 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.068496 kubelet[2618]: E0903 23:26:35.068390 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.068615 kubelet[2618]: W0903 23:26:35.068597 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.068789 kubelet[2618]: E0903 23:26:35.068683 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.069746 kubelet[2618]: E0903 23:26:35.069707 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.069746 kubelet[2618]: W0903 23:26:35.069724 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.069746 kubelet[2618]: E0903 23:26:35.069743 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.070426 kubelet[2618]: E0903 23:26:35.070403 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.070526 kubelet[2618]: W0903 23:26:35.070419 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.070526 kubelet[2618]: E0903 23:26:35.070464 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.070968 kubelet[2618]: E0903 23:26:35.070932 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.070968 kubelet[2618]: W0903 23:26:35.070958 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.071040 kubelet[2618]: E0903 23:26:35.071022 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.071545 kubelet[2618]: E0903 23:26:35.071525 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.071545 kubelet[2618]: W0903 23:26:35.071542 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.071683 kubelet[2618]: E0903 23:26:35.071597 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.072094 kubelet[2618]: E0903 23:26:35.072073 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.072094 kubelet[2618]: W0903 23:26:35.072092 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.072377 kubelet[2618]: E0903 23:26:35.072337 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.072514 kubelet[2618]: E0903 23:26:35.072495 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.072514 kubelet[2618]: W0903 23:26:35.072508 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.072514 kubelet[2618]: E0903 23:26:35.072543 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.072799 kubelet[2618]: E0903 23:26:35.072650 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.072799 kubelet[2618]: W0903 23:26:35.072657 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.072799 kubelet[2618]: E0903 23:26:35.072696 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.073174 kubelet[2618]: E0903 23:26:35.072814 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.073174 kubelet[2618]: W0903 23:26:35.072822 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.073174 kubelet[2618]: E0903 23:26:35.072938 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.073174 kubelet[2618]: E0903 23:26:35.073068 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.073174 kubelet[2618]: W0903 23:26:35.073077 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.073294 kubelet[2618]: E0903 23:26:35.073181 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.073600 kubelet[2618]: E0903 23:26:35.073579 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.073600 kubelet[2618]: W0903 23:26:35.073595 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.073824 kubelet[2618]: E0903 23:26:35.073735 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.073850 kubelet[2618]: E0903 23:26:35.073840 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.073873 kubelet[2618]: W0903 23:26:35.073852 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.073873 kubelet[2618]: E0903 23:26:35.073869 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.074350 kubelet[2618]: E0903 23:26:35.074327 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.074350 kubelet[2618]: W0903 23:26:35.074346 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.074824 kubelet[2618]: E0903 23:26:35.074364 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.074824 kubelet[2618]: E0903 23:26:35.074730 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.074824 kubelet[2618]: W0903 23:26:35.074742 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.074824 kubelet[2618]: E0903 23:26:35.074753 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.076019 kubelet[2618]: E0903 23:26:35.075998 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.076019 kubelet[2618]: W0903 23:26:35.076013 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.076197 kubelet[2618]: E0903 23:26:35.076085 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.076221 kubelet[2618]: E0903 23:26:35.076198 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.076221 kubelet[2618]: W0903 23:26:35.076210 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.076397 kubelet[2618]: E0903 23:26:35.076322 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.076684 kubelet[2618]: E0903 23:26:35.076412 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.076684 kubelet[2618]: W0903 23:26:35.076423 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.076684 kubelet[2618]: E0903 23:26:35.076506 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.077535 kubelet[2618]: E0903 23:26:35.077517 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.077535 kubelet[2618]: W0903 23:26:35.077533 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.077879 kubelet[2618]: E0903 23:26:35.077603 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.078249 kubelet[2618]: E0903 23:26:35.078226 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.078249 kubelet[2618]: W0903 23:26:35.078242 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.078485 kubelet[2618]: E0903 23:26:35.078405 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.078485 kubelet[2618]: E0903 23:26:35.078417 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.078485 kubelet[2618]: W0903 23:26:35.078426 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.078485 kubelet[2618]: E0903 23:26:35.078441 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.078974 kubelet[2618]: E0903 23:26:35.078927 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.078974 kubelet[2618]: W0903 23:26:35.078946 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.078974 kubelet[2618]: E0903 23:26:35.078959 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.079693 kubelet[2618]: E0903 23:26:35.079664 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.079693 kubelet[2618]: W0903 23:26:35.079689 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.079782 kubelet[2618]: E0903 23:26:35.079703 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.091752 kubelet[2618]: E0903 23:26:35.091685 2618 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 3 23:26:35.091752 kubelet[2618]: W0903 23:26:35.091706 2618 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 3 23:26:35.091752 kubelet[2618]: E0903 23:26:35.091721 2618 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 3 23:26:35.933119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950479670.mount: Deactivated successfully.
Sep 3 23:26:36.365700 containerd[1519]: time="2025-09-03T23:26:36.365652101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:36.366176 containerd[1519]: time="2025-09-03T23:26:36.366137434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 3 23:26:36.366980 containerd[1519]: time="2025-09-03T23:26:36.366939641Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:36.369389 containerd[1519]: time="2025-09-03T23:26:36.369071111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:36.369703 containerd[1519]: time="2025-09-03T23:26:36.369659615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.382817261s"
Sep 3 23:26:36.369703 containerd[1519]: time="2025-09-03T23:26:36.369689618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 3 23:26:36.372635 containerd[1519]: time="2025-09-03T23:26:36.372537246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 3 23:26:36.384528 containerd[1519]: time="2025-09-03T23:26:36.384263035Z" level=info msg="CreateContainer within sandbox \"9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 3 23:26:36.413453 containerd[1519]: time="2025-09-03T23:26:36.413399869Z" level=info msg="Container 101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:36.419054 containerd[1519]: time="2025-09-03T23:26:36.419012196Z" level=info msg="CreateContainer within sandbox \"9038c4dda9603cf881159c2362cccc336d8b7e5c92c4eddc4d43f342c6702a3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c\""
Sep 3 23:26:36.419548 containerd[1519]: time="2025-09-03T23:26:36.419515771Z" level=info msg="StartContainer for \"101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c\""
Sep 3 23:26:36.420476 containerd[1519]: time="2025-09-03T23:26:36.420453472Z" level=info msg="connecting to shim 101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c" address="unix:///run/containerd/s/f7f7adcd3bbcf7bf2d912e45d9dcfa0fb25a1b793a01e5f398b4297e641753a0" protocol=ttrpc version=3
Sep 3 23:26:36.422537 kubelet[2618]: E0903 23:26:36.422489 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42"
Sep 3 23:26:36.455339 systemd[1]: Started cri-containerd-101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c.scope - libcontainer container 101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c.
Sep 3 23:26:36.528063 containerd[1519]: time="2025-09-03T23:26:36.527997471Z" level=info msg="StartContainer for \"101f5d009b7d6b8a0214e621b6acdc09cbe536ae5d64e736982daab0f63d1c8c\" returns successfully"
Sep 3 23:26:37.276392 containerd[1519]: time="2025-09-03T23:26:37.276044406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:37.276520 containerd[1519]: time="2025-09-03T23:26:37.276416205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 3 23:26:37.277419 containerd[1519]: time="2025-09-03T23:26:37.277383265Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:37.279825 containerd[1519]: time="2025-09-03T23:26:37.279767832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:37.281384 containerd[1519]: time="2025-09-03T23:26:37.281357516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 908.784786ms"
Sep 3 23:26:37.281492 containerd[1519]: time="2025-09-03T23:26:37.281417802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 3 23:26:37.283522 containerd[1519]: time="2025-09-03T23:26:37.283489417Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 3 23:26:37.313414 containerd[1519]: time="2025-09-03T23:26:37.313373509Z" level=info msg="Container 274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:37.324338 containerd[1519]: time="2025-09-03T23:26:37.324058495Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\""
Sep 3 23:26:37.324646 containerd[1519]: time="2025-09-03T23:26:37.324603912Z" level=info msg="StartContainer for \"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\""
Sep 3 23:26:37.326379 containerd[1519]: time="2025-09-03T23:26:37.326293966Z" level=info msg="connecting to shim 274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916" address="unix:///run/containerd/s/ba6f417e62a4113951e50b736d0b72688286b457dcf2b8737c8bded6992360f0" protocol=ttrpc version=3
Sep 3 23:26:37.345331 systemd[1]: Started cri-containerd-274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916.scope - libcontainer container 274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916.
Sep 3 23:26:37.385326 containerd[1519]: time="2025-09-03T23:26:37.385290192Z" level=info msg="StartContainer for \"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\" returns successfully"
Sep 3 23:26:37.395973 systemd[1]: cri-containerd-274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916.scope: Deactivated successfully.
Sep 3 23:26:37.429878 containerd[1519]: time="2025-09-03T23:26:37.429599337Z" level=info msg="received exit event container_id:\"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\" id:\"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\" pid:3280 exited_at:{seconds:1756941997 nanos:411362330}"
Sep 3 23:26:37.431785 containerd[1519]: time="2025-09-03T23:26:37.431736798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\" id:\"274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916\" pid:3280 exited_at:{seconds:1756941997 nanos:411362330}"
Sep 3 23:26:37.494188 kubelet[2618]: E0903 23:26:37.490549 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:37.491044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-274db71e1407ac5845442fe847b14403373075d96463d958ffd8fb49223dc916-rootfs.mount: Deactivated successfully.
Sep 3 23:26:38.421826 kubelet[2618]: E0903 23:26:38.421773 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42"
Sep 3 23:26:38.497798 containerd[1519]: time="2025-09-03T23:26:38.497708101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 3 23:26:38.504548 kubelet[2618]: I0903 23:26:38.504507 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:26:38.504844 kubelet[2618]: E0903 23:26:38.504814 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:26:38.516668 kubelet[2618]: I0903 23:26:38.516615 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fb87bfc74-fcwqf" podStartSLOduration=3.130705095 podStartE2EDuration="4.51657589s" podCreationTimestamp="2025-09-03 23:26:34 +0000 UTC" firstStartedPulling="2025-09-03 23:26:34.986529117 +0000 UTC m=+21.656451904" lastFinishedPulling="2025-09-03 23:26:36.372399912 +0000 UTC m=+23.042322699" observedRunningTime="2025-09-03 23:26:37.520841099 +0000 UTC m=+24.190763886" watchObservedRunningTime="2025-09-03 23:26:38.51657589 +0000 UTC m=+25.186498717"
Sep 3 23:26:40.422118 kubelet[2618]: E0903 23:26:40.422070 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42"
Sep 3 23:26:42.159905 containerd[1519]: time="2025-09-03T23:26:42.159863998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:42.160768 containerd[1519]: time="2025-09-03T23:26:42.160461848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 3 23:26:42.161608 containerd[1519]: time="2025-09-03T23:26:42.161580462Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:42.163063 containerd[1519]: time="2025-09-03T23:26:42.163019823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 3 23:26:42.164075 containerd[1519]: time="2025-09-03T23:26:42.164037508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.666293604s"
Sep 3 23:26:42.164075 containerd[1519]: time="2025-09-03T23:26:42.164069631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 3 23:26:42.167261 containerd[1519]: time="2025-09-03T23:26:42.167232056Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 3 23:26:42.175434 containerd[1519]: time="2025-09-03T23:26:42.173960941Z" level=info msg="Container 92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565: CDI devices from CRI Config.CDIDevices: []"
Sep 3 23:26:42.182212 containerd[1519]: time="2025-09-03T23:26:42.182167549Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\""
Sep 3 23:26:42.182874 containerd[1519]: time="2025-09-03T23:26:42.182794561Z" level=info msg="StartContainer for \"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\""
Sep 3 23:26:42.184635 containerd[1519]: time="2025-09-03T23:26:42.184563870Z" level=info msg="connecting to shim 92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565" address="unix:///run/containerd/s/ba6f417e62a4113951e50b736d0b72688286b457dcf2b8737c8bded6992360f0" protocol=ttrpc version=3
Sep 3 23:26:42.207330 systemd[1]: Started cri-containerd-92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565.scope - libcontainer container 92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565.
Sep 3 23:26:42.245826 containerd[1519]: time="2025-09-03T23:26:42.245778323Z" level=info msg="StartContainer for \"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\" returns successfully" Sep 3 23:26:42.422572 kubelet[2618]: E0903 23:26:42.422114 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42" Sep 3 23:26:42.756316 containerd[1519]: time="2025-09-03T23:26:42.756204169Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 3 23:26:42.758707 systemd[1]: cri-containerd-92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565.scope: Deactivated successfully. Sep 3 23:26:42.758967 systemd[1]: cri-containerd-92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565.scope: Consumed 451ms CPU time, 177M memory peak, 3.1M read from disk, 165.8M written to disk. 
Sep 3 23:26:42.760582 containerd[1519]: time="2025-09-03T23:26:42.760548573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\" id:\"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\" pid:3341 exited_at:{seconds:1756942002 nanos:760207424}" Sep 3 23:26:42.764749 containerd[1519]: time="2025-09-03T23:26:42.764711362Z" level=info msg="received exit event container_id:\"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\" id:\"92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565\" pid:3341 exited_at:{seconds:1756942002 nanos:760207424}" Sep 3 23:26:42.786358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92743b84f1e77dfb704a9e8fab4ef98876fbd2c8cce3faaf6e78a9b1e6668565-rootfs.mount: Deactivated successfully. Sep 3 23:26:42.838488 kubelet[2618]: I0903 23:26:42.837922 2618 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 3 23:26:42.884931 systemd[1]: Created slice kubepods-burstable-pod71e565b3_c93f_4af8_84a4_0264261cfed3.slice - libcontainer container kubepods-burstable-pod71e565b3_c93f_4af8_84a4_0264261cfed3.slice. Sep 3 23:26:42.895047 systemd[1]: Created slice kubepods-besteffort-podc33a11cf_fb3f_4fd0_a644_0bde83837f45.slice - libcontainer container kubepods-besteffort-podc33a11cf_fb3f_4fd0_a644_0bde83837f45.slice. Sep 3 23:26:42.904631 systemd[1]: Created slice kubepods-besteffort-pod39ec3e7e_ce0d_4dbe_a09f_c5e0f1a6ecc5.slice - libcontainer container kubepods-besteffort-pod39ec3e7e_ce0d_4dbe_a09f_c5e0f1a6ecc5.slice. Sep 3 23:26:42.916335 systemd[1]: Created slice kubepods-besteffort-pod315871cc_207c_4bc8_8418_826b1e40bbea.slice - libcontainer container kubepods-besteffort-pod315871cc_207c_4bc8_8418_826b1e40bbea.slice. 
Sep 3 23:26:42.929159 kubelet[2618]: I0903 23:26:42.929124 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71e565b3-c93f-4af8-84a4-0264261cfed3-config-volume\") pod \"coredns-668d6bf9bc-7tk4d\" (UID: \"71e565b3-c93f-4af8-84a4-0264261cfed3\") " pod="kube-system/coredns-668d6bf9bc-7tk4d" Sep 3 23:26:42.929929 kubelet[2618]: I0903 23:26:42.929899 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/315871cc-207c-4bc8-8418-826b1e40bbea-tigera-ca-bundle\") pod \"calico-kube-controllers-5664bd6fb6-wx2wj\" (UID: \"315871cc-207c-4bc8-8418-826b1e40bbea\") " pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" Sep 3 23:26:42.929993 kubelet[2618]: I0903 23:26:42.929938 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgrn\" (UniqueName: \"kubernetes.io/projected/c33a11cf-fb3f-4fd0-a644-0bde83837f45-kube-api-access-rqgrn\") pod \"calico-apiserver-6d59cd756f-lfnp9\" (UID: \"c33a11cf-fb3f-4fd0-a644-0bde83837f45\") " pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" Sep 3 23:26:42.929993 kubelet[2618]: I0903 23:26:42.929979 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2edcb6a9-5ded-4011-9542-41e5097f9c68-calico-apiserver-certs\") pod \"calico-apiserver-6d59cd756f-wbr7d\" (UID: \"2edcb6a9-5ded-4011-9542-41e5097f9c68\") " pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" Sep 3 23:26:42.930039 kubelet[2618]: I0903 23:26:42.929998 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49xt\" (UniqueName: \"kubernetes.io/projected/2edcb6a9-5ded-4011-9542-41e5097f9c68-kube-api-access-n49xt\") pod 
\"calico-apiserver-6d59cd756f-wbr7d\" (UID: \"2edcb6a9-5ded-4011-9542-41e5097f9c68\") " pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" Sep 3 23:26:42.930039 kubelet[2618]: I0903 23:26:42.930018 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsh4\" (UniqueName: \"kubernetes.io/projected/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-kube-api-access-ptsh4\") pod \"whisker-6c68b9c757-zqz8d\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " pod="calico-system/whisker-6c68b9c757-zqz8d" Sep 3 23:26:42.930081 kubelet[2618]: I0903 23:26:42.930036 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnrq\" (UniqueName: \"kubernetes.io/projected/315871cc-207c-4bc8-8418-826b1e40bbea-kube-api-access-vqnrq\") pod \"calico-kube-controllers-5664bd6fb6-wx2wj\" (UID: \"315871cc-207c-4bc8-8418-826b1e40bbea\") " pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" Sep 3 23:26:42.930081 kubelet[2618]: I0903 23:26:42.930056 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvp47\" (UniqueName: \"kubernetes.io/projected/71e565b3-c93f-4af8-84a4-0264261cfed3-kube-api-access-zvp47\") pod \"coredns-668d6bf9bc-7tk4d\" (UID: \"71e565b3-c93f-4af8-84a4-0264261cfed3\") " pod="kube-system/coredns-668d6bf9bc-7tk4d" Sep 3 23:26:42.930081 kubelet[2618]: I0903 23:26:42.930074 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcdef1c-95ea-4127-a5a9-56fdc7574efb-config-volume\") pod \"coredns-668d6bf9bc-hxr7s\" (UID: \"cdcdef1c-95ea-4127-a5a9-56fdc7574efb\") " pod="kube-system/coredns-668d6bf9bc-hxr7s" Sep 3 23:26:42.930167 kubelet[2618]: I0903 23:26:42.930089 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t47p\" 
(UniqueName: \"kubernetes.io/projected/cdcdef1c-95ea-4127-a5a9-56fdc7574efb-kube-api-access-6t47p\") pod \"coredns-668d6bf9bc-hxr7s\" (UID: \"cdcdef1c-95ea-4127-a5a9-56fdc7574efb\") " pod="kube-system/coredns-668d6bf9bc-hxr7s" Sep 3 23:26:42.930167 kubelet[2618]: I0903 23:26:42.930107 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9s7\" (UniqueName: \"kubernetes.io/projected/e3f5b663-d94b-4b50-8439-ece6c594b74a-kube-api-access-ck9s7\") pod \"goldmane-54d579b49d-hldrj\" (UID: \"e3f5b663-d94b-4b50-8439-ece6c594b74a\") " pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:42.930167 kubelet[2618]: I0903 23:26:42.930125 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-backend-key-pair\") pod \"whisker-6c68b9c757-zqz8d\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " pod="calico-system/whisker-6c68b9c757-zqz8d" Sep 3 23:26:42.931075 kubelet[2618]: I0903 23:26:42.930142 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c33a11cf-fb3f-4fd0-a644-0bde83837f45-calico-apiserver-certs\") pod \"calico-apiserver-6d59cd756f-lfnp9\" (UID: \"c33a11cf-fb3f-4fd0-a644-0bde83837f45\") " pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" Sep 3 23:26:42.931153 kubelet[2618]: I0903 23:26:42.931117 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3f5b663-d94b-4b50-8439-ece6c594b74a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hldrj\" (UID: \"e3f5b663-d94b-4b50-8439-ece6c594b74a\") " pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:42.931153 kubelet[2618]: I0903 23:26:42.931137 2618 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-ca-bundle\") pod \"whisker-6c68b9c757-zqz8d\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " pod="calico-system/whisker-6c68b9c757-zqz8d" Sep 3 23:26:42.931211 kubelet[2618]: I0903 23:26:42.931199 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f5b663-d94b-4b50-8439-ece6c594b74a-config\") pod \"goldmane-54d579b49d-hldrj\" (UID: \"e3f5b663-d94b-4b50-8439-ece6c594b74a\") " pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:42.931241 kubelet[2618]: I0903 23:26:42.931216 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e3f5b663-d94b-4b50-8439-ece6c594b74a-goldmane-key-pair\") pod \"goldmane-54d579b49d-hldrj\" (UID: \"e3f5b663-d94b-4b50-8439-ece6c594b74a\") " pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:42.933540 systemd[1]: Created slice kubepods-besteffort-pod2edcb6a9_5ded_4011_9542_41e5097f9c68.slice - libcontainer container kubepods-besteffort-pod2edcb6a9_5ded_4011_9542_41e5097f9c68.slice. Sep 3 23:26:42.934959 systemd[1]: Created slice kubepods-besteffort-pode3f5b663_d94b_4b50_8439_ece6c594b74a.slice - libcontainer container kubepods-besteffort-pode3f5b663_d94b_4b50_8439_ece6c594b74a.slice. Sep 3 23:26:42.941427 systemd[1]: Created slice kubepods-burstable-podcdcdef1c_95ea_4127_a5a9_56fdc7574efb.slice - libcontainer container kubepods-burstable-podcdcdef1c_95ea_4127_a5a9_56fdc7574efb.slice. 
Sep 3 23:26:43.190761 kubelet[2618]: E0903 23:26:43.190716 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:43.191652 containerd[1519]: time="2025-09-03T23:26:43.191387132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7tk4d,Uid:71e565b3-c93f-4af8-84a4-0264261cfed3,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:43.201346 containerd[1519]: time="2025-09-03T23:26:43.201307452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-lfnp9,Uid:c33a11cf-fb3f-4fd0-a644-0bde83837f45,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:26:43.214813 containerd[1519]: time="2025-09-03T23:26:43.214672049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c68b9c757-zqz8d,Uid:39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:43.235276 containerd[1519]: time="2025-09-03T23:26:43.235167022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5664bd6fb6-wx2wj,Uid:315871cc-207c-4bc8-8418-826b1e40bbea,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:43.239571 containerd[1519]: time="2025-09-03T23:26:43.239530334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-wbr7d,Uid:2edcb6a9-5ded-4011-9542-41e5097f9c68,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:26:43.239938 containerd[1519]: time="2025-09-03T23:26:43.239720189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hldrj,Uid:e3f5b663-d94b-4b50-8439-ece6c594b74a,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:43.244406 kubelet[2618]: E0903 23:26:43.244374 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:43.244878 containerd[1519]: 
time="2025-09-03T23:26:43.244847083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxr7s,Uid:cdcdef1c-95ea-4127-a5a9-56fdc7574efb,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:43.328163 containerd[1519]: time="2025-09-03T23:26:43.328016670Z" level=error msg="Failed to destroy network for sandbox \"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.330944 containerd[1519]: time="2025-09-03T23:26:43.330868540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-wbr7d,Uid:2edcb6a9-5ded-4011-9542-41e5097f9c68,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.332599 kubelet[2618]: E0903 23:26:43.332551 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.333593 kubelet[2618]: E0903 23:26:43.333262 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" Sep 3 23:26:43.333593 kubelet[2618]: E0903 23:26:43.333300 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" Sep 3 23:26:43.333593 kubelet[2618]: E0903 23:26:43.333343 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d59cd756f-wbr7d_calico-apiserver(2edcb6a9-5ded-4011-9542-41e5097f9c68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d59cd756f-wbr7d_calico-apiserver(2edcb6a9-5ded-4011-9542-41e5097f9c68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"694bccf598d9c85b689bb997f72066eaf9f61f944d3891a171837f20a3f71f9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" podUID="2edcb6a9-5ded-4011-9542-41e5097f9c68" Sep 3 23:26:43.334403 containerd[1519]: time="2025-09-03T23:26:43.334371662Z" level=error msg="Failed to destroy network for sandbox \"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.336589 containerd[1519]: time="2025-09-03T23:26:43.336551638Z" level=error msg="Failed to destroy network for sandbox 
\"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.337075 containerd[1519]: time="2025-09-03T23:26:43.337042118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c68b9c757-zqz8d,Uid:39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.337475 kubelet[2618]: E0903 23:26:43.337433 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.337595 kubelet[2618]: E0903 23:26:43.337488 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c68b9c757-zqz8d" Sep 3 23:26:43.337595 kubelet[2618]: E0903 23:26:43.337507 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c68b9c757-zqz8d" Sep 3 23:26:43.337595 kubelet[2618]: E0903 23:26:43.337544 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c68b9c757-zqz8d_calico-system(39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c68b9c757-zqz8d_calico-system(39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"293ba1a81606646af4af2aab7a0fe9402cfd74e73adadde7cdaa10fa85309014\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c68b9c757-zqz8d" podUID="39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5" Sep 3 23:26:43.339021 containerd[1519]: time="2025-09-03T23:26:43.338901627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7tk4d,Uid:71e565b3-c93f-4af8-84a4-0264261cfed3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.339110 kubelet[2618]: E0903 23:26:43.339076 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.339547 kubelet[2618]: E0903 23:26:43.339117 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7tk4d" Sep 3 23:26:43.339547 kubelet[2618]: E0903 23:26:43.339133 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7tk4d" Sep 3 23:26:43.339547 kubelet[2618]: E0903 23:26:43.339231 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7tk4d_kube-system(71e565b3-c93f-4af8-84a4-0264261cfed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7tk4d_kube-system(71e565b3-c93f-4af8-84a4-0264261cfed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a41d034209a860687317e1fbfc822031f510b1c555879fc882083bfb5b5e240\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7tk4d" podUID="71e565b3-c93f-4af8-84a4-0264261cfed3" Sep 3 23:26:43.343751 containerd[1519]: time="2025-09-03T23:26:43.343645770Z" 
level=error msg="Failed to destroy network for sandbox \"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.344899 containerd[1519]: time="2025-09-03T23:26:43.344831186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-lfnp9,Uid:c33a11cf-fb3f-4fd0-a644-0bde83837f45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.345285 kubelet[2618]: E0903 23:26:43.345238 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.345337 kubelet[2618]: E0903 23:26:43.345296 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" Sep 3 23:26:43.345337 kubelet[2618]: E0903 23:26:43.345313 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" Sep 3 23:26:43.345383 kubelet[2618]: E0903 23:26:43.345348 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d59cd756f-lfnp9_calico-apiserver(c33a11cf-fb3f-4fd0-a644-0bde83837f45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d59cd756f-lfnp9_calico-apiserver(c33a11cf-fb3f-4fd0-a644-0bde83837f45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f12a4cf3ee776ae243f945cd17f10c0f04e7a66d7e157daa15489189516271ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" podUID="c33a11cf-fb3f-4fd0-a644-0bde83837f45" Sep 3 23:26:43.356798 containerd[1519]: time="2025-09-03T23:26:43.356748787Z" level=error msg="Failed to destroy network for sandbox \"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.358130 containerd[1519]: time="2025-09-03T23:26:43.358092655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5664bd6fb6-wx2wj,Uid:315871cc-207c-4bc8-8418-826b1e40bbea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.358351 kubelet[2618]: E0903 23:26:43.358312 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.358394 kubelet[2618]: E0903 23:26:43.358369 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" Sep 3 23:26:43.358394 kubelet[2618]: E0903 23:26:43.358389 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" Sep 3 23:26:43.358456 kubelet[2618]: E0903 23:26:43.358424 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5664bd6fb6-wx2wj_calico-system(315871cc-207c-4bc8-8418-826b1e40bbea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5664bd6fb6-wx2wj_calico-system(315871cc-207c-4bc8-8418-826b1e40bbea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4fab72b76c2e32636522f123f982068a5ff62f0602e185fb12e4fbc94fe6c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" podUID="315871cc-207c-4bc8-8418-826b1e40bbea" Sep 3 23:26:43.361136 containerd[1519]: time="2025-09-03T23:26:43.361099218Z" level=error msg="Failed to destroy network for sandbox \"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.362103 containerd[1519]: time="2025-09-03T23:26:43.362070016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hldrj,Uid:e3f5b663-d94b-4b50-8439-ece6c594b74a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.362325 kubelet[2618]: E0903 23:26:43.362284 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.362385 kubelet[2618]: E0903 
23:26:43.362340 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:43.362385 kubelet[2618]: E0903 23:26:43.362358 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hldrj" Sep 3 23:26:43.362431 kubelet[2618]: E0903 23:26:43.362393 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hldrj_calico-system(e3f5b663-d94b-4b50-8439-ece6c594b74a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hldrj_calico-system(e3f5b663-d94b-4b50-8439-ece6c594b74a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a75f330fbd429f34ad836c7d370680b2ba9e6d7ad85c5dc084c03d09fa68c67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hldrj" podUID="e3f5b663-d94b-4b50-8439-ece6c594b74a" Sep 3 23:26:43.362549 containerd[1519]: time="2025-09-03T23:26:43.362520852Z" level=error msg="Failed to destroy network for sandbox \"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.363524 containerd[1519]: time="2025-09-03T23:26:43.363424605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxr7s,Uid:cdcdef1c-95ea-4127-a5a9-56fdc7574efb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.363649 kubelet[2618]: E0903 23:26:43.363617 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:43.363712 kubelet[2618]: E0903 23:26:43.363679 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hxr7s" Sep 3 23:26:43.363712 kubelet[2618]: E0903 23:26:43.363704 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hxr7s" Sep 3 23:26:43.363780 kubelet[2618]: E0903 23:26:43.363748 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hxr7s_kube-system(cdcdef1c-95ea-4127-a5a9-56fdc7574efb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hxr7s_kube-system(cdcdef1c-95ea-4127-a5a9-56fdc7574efb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c29fdb9947830ecc287ce036e464bb6de919f26496bd50b8e91637c06ff2f800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hxr7s" podUID="cdcdef1c-95ea-4127-a5a9-56fdc7574efb" Sep 3 23:26:43.512473 containerd[1519]: time="2025-09-03T23:26:43.512425941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 3 23:26:44.175570 systemd[1]: run-netns-cni\x2d6411a8d7\x2d691b\x2d8202\x2d4a0b\x2dff1e7b27db76.mount: Deactivated successfully. Sep 3 23:26:44.175661 systemd[1]: run-netns-cni\x2d26fc1796\x2d54d8\x2ddda5\x2d747f\x2d59c5307a2034.mount: Deactivated successfully. Sep 3 23:26:44.175708 systemd[1]: run-netns-cni\x2d4c188c56\x2d73ab\x2dd85f\x2db4fd\x2d860f8a7575eb.mount: Deactivated successfully. Sep 3 23:26:44.175750 systemd[1]: run-netns-cni\x2dbe60b073\x2df98d\x2d9ab1\x2de6ac\x2d1df3fb178754.mount: Deactivated successfully. Sep 3 23:26:44.426930 systemd[1]: Created slice kubepods-besteffort-pod18b1e484_37ee_425c_815e_d87a59135b42.slice - libcontainer container kubepods-besteffort-pod18b1e484_37ee_425c_815e_d87a59135b42.slice. 
Sep 3 23:26:44.430310 containerd[1519]: time="2025-09-03T23:26:44.429682537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6s7q,Uid:18b1e484-37ee-425c-815e-d87a59135b42,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:44.481490 containerd[1519]: time="2025-09-03T23:26:44.481368949Z" level=error msg="Failed to destroy network for sandbox \"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:44.482649 containerd[1519]: time="2025-09-03T23:26:44.482615166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6s7q,Uid:18b1e484-37ee-425c-815e-d87a59135b42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:44.483012 kubelet[2618]: E0903 23:26:44.482963 2618 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 3 23:26:44.483303 systemd[1]: run-netns-cni\x2d8ef425f2\x2dbae7\x2db4f9\x2da7e6\x2d4ff07fdfc6de.mount: Deactivated successfully. 
Sep 3 23:26:44.483788 kubelet[2618]: E0903 23:26:44.483351 2618 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6s7q" Sep 3 23:26:44.483788 kubelet[2618]: E0903 23:26:44.483397 2618 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f6s7q" Sep 3 23:26:44.483891 kubelet[2618]: E0903 23:26:44.483450 2618 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f6s7q_calico-system(18b1e484-37ee-425c-815e-d87a59135b42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f6s7q_calico-system(18b1e484-37ee-425c-815e-d87a59135b42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90dfff47631fb27f0bf2ca5291773b3a751daf94260e2fe65dcd23fb962c4fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f6s7q" podUID="18b1e484-37ee-425c-815e-d87a59135b42" Sep 3 23:26:44.962658 kubelet[2618]: I0903 23:26:44.962224 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:26:44.962658 kubelet[2618]: E0903 23:26:44.962582 2618 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:45.514916 kubelet[2618]: E0903 23:26:45.514869 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:47.198781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28802053.mount: Deactivated successfully. Sep 3 23:26:47.428775 containerd[1519]: time="2025-09-03T23:26:47.428206728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:47.437267 containerd[1519]: time="2025-09-03T23:26:47.428518150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 3 23:26:47.437360 containerd[1519]: time="2025-09-03T23:26:47.429529540Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:47.437603 containerd[1519]: time="2025-09-03T23:26:47.431375509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.918900764s" Sep 3 23:26:47.437603 containerd[1519]: time="2025-09-03T23:26:47.437521697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 3 23:26:47.437833 containerd[1519]: time="2025-09-03T23:26:47.437811077Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:47.471122 containerd[1519]: time="2025-09-03T23:26:47.471024191Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 3 23:26:47.493331 containerd[1519]: time="2025-09-03T23:26:47.493278261Z" level=info msg="Container c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:47.504933 containerd[1519]: time="2025-09-03T23:26:47.504885949Z" level=info msg="CreateContainer within sandbox \"93dfb61c210ce5166b1aea9cf284aa07b3c3ebfdf75ec7da8366659f0ab80dc2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\"" Sep 3 23:26:47.505528 containerd[1519]: time="2025-09-03T23:26:47.505428987Z" level=info msg="StartContainer for \"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\"" Sep 3 23:26:47.506997 containerd[1519]: time="2025-09-03T23:26:47.506954973Z" level=info msg="connecting to shim c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09" address="unix:///run/containerd/s/ba6f417e62a4113951e50b736d0b72688286b457dcf2b8737c8bded6992360f0" protocol=ttrpc version=3 Sep 3 23:26:47.528303 systemd[1]: Started cri-containerd-c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09.scope - libcontainer container c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09. Sep 3 23:26:47.569189 containerd[1519]: time="2025-09-03T23:26:47.569107502Z" level=info msg="StartContainer for \"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\" returns successfully" Sep 3 23:26:47.685947 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Sep 3 23:26:47.686040 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 3 23:26:47.862835 kubelet[2618]: I0903 23:26:47.862776 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-backend-key-pair\") pod \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " Sep 3 23:26:47.863213 kubelet[2618]: I0903 23:26:47.862916 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsh4\" (UniqueName: \"kubernetes.io/projected/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-kube-api-access-ptsh4\") pod \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " Sep 3 23:26:47.863346 kubelet[2618]: I0903 23:26:47.863288 2618 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-ca-bundle\") pod \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\" (UID: \"39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5\") " Sep 3 23:26:47.864125 kubelet[2618]: I0903 23:26:47.863756 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5" (UID: "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 3 23:26:47.867035 kubelet[2618]: I0903 23:26:47.866999 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-kube-api-access-ptsh4" (OuterVolumeSpecName: "kube-api-access-ptsh4") pod "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5" (UID: "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5"). 
InnerVolumeSpecName "kube-api-access-ptsh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 3 23:26:47.867112 kubelet[2618]: I0903 23:26:47.867062 2618 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5" (UID: "39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 3 23:26:47.964085 kubelet[2618]: I0903 23:26:47.964036 2618 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptsh4\" (UniqueName: \"kubernetes.io/projected/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-kube-api-access-ptsh4\") on node \"localhost\" DevicePath \"\"" Sep 3 23:26:47.964085 kubelet[2618]: I0903 23:26:47.964070 2618 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 3 23:26:47.964085 kubelet[2618]: I0903 23:26:47.964079 2618 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 3 23:26:48.199534 systemd[1]: var-lib-kubelet-pods-39ec3e7e\x2dce0d\x2d4dbe\x2da09f\x2dc5e0f1a6ecc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dptsh4.mount: Deactivated successfully. Sep 3 23:26:48.199626 systemd[1]: var-lib-kubelet-pods-39ec3e7e\x2dce0d\x2d4dbe\x2da09f\x2dc5e0f1a6ecc5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 3 23:26:48.560057 systemd[1]: Removed slice kubepods-besteffort-pod39ec3e7e_ce0d_4dbe_a09f_c5e0f1a6ecc5.slice - libcontainer container kubepods-besteffort-pod39ec3e7e_ce0d_4dbe_a09f_c5e0f1a6ecc5.slice. Sep 3 23:26:48.564278 kubelet[2618]: I0903 23:26:48.564227 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jb2bd" podStartSLOduration=2.140464149 podStartE2EDuration="14.564209898s" podCreationTimestamp="2025-09-03 23:26:34 +0000 UTC" firstStartedPulling="2025-09-03 23:26:35.037421475 +0000 UTC m=+21.707344262" lastFinishedPulling="2025-09-03 23:26:47.461167224 +0000 UTC m=+34.131090011" observedRunningTime="2025-09-03 23:26:48.56364234 +0000 UTC m=+35.233565127" watchObservedRunningTime="2025-09-03 23:26:48.564209898 +0000 UTC m=+35.234132685" Sep 3 23:26:48.620043 systemd[1]: Created slice kubepods-besteffort-pod9159e83f_508d_4fbf_a64e_fbd28616ae3c.slice - libcontainer container kubepods-besteffort-pod9159e83f_508d_4fbf_a64e_fbd28616ae3c.slice. 
Sep 3 23:26:48.669014 kubelet[2618]: I0903 23:26:48.668245 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9159e83f-508d-4fbf-a64e-fbd28616ae3c-whisker-backend-key-pair\") pod \"whisker-7bb896d996-j8sxr\" (UID: \"9159e83f-508d-4fbf-a64e-fbd28616ae3c\") " pod="calico-system/whisker-7bb896d996-j8sxr" Sep 3 23:26:48.669014 kubelet[2618]: I0903 23:26:48.668293 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9159e83f-508d-4fbf-a64e-fbd28616ae3c-whisker-ca-bundle\") pod \"whisker-7bb896d996-j8sxr\" (UID: \"9159e83f-508d-4fbf-a64e-fbd28616ae3c\") " pod="calico-system/whisker-7bb896d996-j8sxr" Sep 3 23:26:48.669014 kubelet[2618]: I0903 23:26:48.668314 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7q25\" (UniqueName: \"kubernetes.io/projected/9159e83f-508d-4fbf-a64e-fbd28616ae3c-kube-api-access-f7q25\") pod \"whisker-7bb896d996-j8sxr\" (UID: \"9159e83f-508d-4fbf-a64e-fbd28616ae3c\") " pod="calico-system/whisker-7bb896d996-j8sxr" Sep 3 23:26:48.701599 containerd[1519]: time="2025-09-03T23:26:48.701557824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\" id:\"738abeb559cd3b90d636f1f4f0e386590cfa88d7e8dcd3211033a985b0953315\" pid:3733 exit_status:1 exited_at:{seconds:1756942008 nanos:700946063}" Sep 3 23:26:48.923743 containerd[1519]: time="2025-09-03T23:26:48.923394718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb896d996-j8sxr,Uid:9159e83f-508d-4fbf-a64e-fbd28616ae3c,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:49.123266 systemd-networkd[1430]: cali0cfd937de0c: Link UP Sep 3 23:26:49.125312 systemd-networkd[1430]: cali0cfd937de0c: Gained carrier Sep 3 
23:26:49.144068 containerd[1519]: 2025-09-03 23:26:48.942 [INFO][3747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 3 23:26:49.144068 containerd[1519]: 2025-09-03 23:26:48.969 [INFO][3747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7bb896d996--j8sxr-eth0 whisker-7bb896d996- calico-system 9159e83f-508d-4fbf-a64e-fbd28616ae3c 884 0 2025-09-03 23:26:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bb896d996 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7bb896d996-j8sxr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0cfd937de0c [] [] }} ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-" Sep 3 23:26:49.144068 containerd[1519]: 2025-09-03 23:26:48.969 [INFO][3747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144068 containerd[1519]: 2025-09-03 23:26:49.055 [INFO][3762] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" HandleID="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Workload="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.055 [INFO][3762] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" 
HandleID="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Workload="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011f930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7bb896d996-j8sxr", "timestamp":"2025-09-03 23:26:49.055593539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.055 [INFO][3762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.055 [INFO][3762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.056 [INFO][3762] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.073 [INFO][3762] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" host="localhost" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.086 [INFO][3762] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.091 [INFO][3762] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.093 [INFO][3762] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.095 [INFO][3762] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:49.144503 containerd[1519]: 2025-09-03 23:26:49.096 [INFO][3762] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" host="localhost" Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.098 [INFO][3762] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807 Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.104 [INFO][3762] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" host="localhost" Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.110 [INFO][3762] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" host="localhost" Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.110 [INFO][3762] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" host="localhost" Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.110 [INFO][3762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 3 23:26:49.144702 containerd[1519]: 2025-09-03 23:26:49.110 [INFO][3762] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" HandleID="k8s-pod-network.8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Workload="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144809 containerd[1519]: 2025-09-03 23:26:49.113 [INFO][3747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bb896d996--j8sxr-eth0", GenerateName:"whisker-7bb896d996-", Namespace:"calico-system", SelfLink:"", UID:"9159e83f-508d-4fbf-a64e-fbd28616ae3c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bb896d996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7bb896d996-j8sxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cfd937de0c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:49.144809 containerd[1519]: 2025-09-03 23:26:49.113 [INFO][3747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144889 containerd[1519]: 2025-09-03 23:26:49.114 [INFO][3747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cfd937de0c ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144889 containerd[1519]: 2025-09-03 23:26:49.125 [INFO][3747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.144931 containerd[1519]: 2025-09-03 23:26:49.126 [INFO][3747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bb896d996--j8sxr-eth0", GenerateName:"whisker-7bb896d996-", Namespace:"calico-system", SelfLink:"", UID:"9159e83f-508d-4fbf-a64e-fbd28616ae3c", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bb896d996", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807", Pod:"whisker-7bb896d996-j8sxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cfd937de0c", MAC:"46:ee:97:27:cd:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:49.144978 containerd[1519]: 2025-09-03 23:26:49.139 [INFO][3747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" Namespace="calico-system" Pod="whisker-7bb896d996-j8sxr" WorkloadEndpoint="localhost-k8s-whisker--7bb896d996--j8sxr-eth0" Sep 3 23:26:49.297032 containerd[1519]: time="2025-09-03T23:26:49.296983221Z" level=info msg="connecting to shim 8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807" address="unix:///run/containerd/s/b87d216241db39ea0ce75570f9bd8e935208fa2be6e3380c9559a8f0a21e343c" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:49.330319 systemd[1]: Started cri-containerd-8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807.scope - libcontainer container 8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807. 
Sep 3 23:26:49.344583 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:49.372666 containerd[1519]: time="2025-09-03T23:26:49.372628428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb896d996-j8sxr,Uid:9159e83f-508d-4fbf-a64e-fbd28616ae3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807\"" Sep 3 23:26:49.375068 containerd[1519]: time="2025-09-03T23:26:49.375038505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 3 23:26:49.412422 systemd-networkd[1430]: vxlan.calico: Link UP Sep 3 23:26:49.412432 systemd-networkd[1430]: vxlan.calico: Gained carrier Sep 3 23:26:49.424435 kubelet[2618]: I0903 23:26:49.424405 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5" path="/var/lib/kubelet/pods/39ec3e7e-ce0d-4dbe-a09f-c5e0f1a6ecc5/volumes" Sep 3 23:26:49.646368 containerd[1519]: time="2025-09-03T23:26:49.646037836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\" id:\"9de5321443c23920d74006bb5b3428029a9f0eb0266d56f2d61f5d3f13705290\" pid:4013 exit_status:1 exited_at:{seconds:1756942009 nanos:645754618}" Sep 3 23:26:50.287166 containerd[1519]: time="2025-09-03T23:26:50.286787905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:50.287535 containerd[1519]: time="2025-09-03T23:26:50.287511551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 3 23:26:50.288030 containerd[1519]: time="2025-09-03T23:26:50.288008902Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:50.290286 containerd[1519]: time="2025-09-03T23:26:50.290242123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:50.290796 containerd[1519]: time="2025-09-03T23:26:50.290761676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 915.690128ms" Sep 3 23:26:50.290842 containerd[1519]: time="2025-09-03T23:26:50.290797638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 3 23:26:50.293904 containerd[1519]: time="2025-09-03T23:26:50.293878592Z" level=info msg="CreateContainer within sandbox \"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 3 23:26:50.300425 containerd[1519]: time="2025-09-03T23:26:50.300389403Z" level=info msg="Container 72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:50.302662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539650299.mount: Deactivated successfully. 
Sep 3 23:26:50.306835 containerd[1519]: time="2025-09-03T23:26:50.306788567Z" level=info msg="CreateContainer within sandbox \"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731\"" Sep 3 23:26:50.307226 containerd[1519]: time="2025-09-03T23:26:50.307201473Z" level=info msg="StartContainer for \"72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731\"" Sep 3 23:26:50.308452 containerd[1519]: time="2025-09-03T23:26:50.308135772Z" level=info msg="connecting to shim 72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731" address="unix:///run/containerd/s/b87d216241db39ea0ce75570f9bd8e935208fa2be6e3380c9559a8f0a21e343c" protocol=ttrpc version=3 Sep 3 23:26:50.334307 systemd[1]: Started cri-containerd-72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731.scope - libcontainer container 72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731. Sep 3 23:26:50.368863 containerd[1519]: time="2025-09-03T23:26:50.368821560Z" level=info msg="StartContainer for \"72efc9412254cf8a52c122df0d317d28cb8447c8f1b9357ce0c699e6af84a731\" returns successfully" Sep 3 23:26:50.370381 containerd[1519]: time="2025-09-03T23:26:50.370350217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 3 23:26:50.725302 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL Sep 3 23:26:50.917327 systemd-networkd[1430]: cali0cfd937de0c: Gained IPv6LL Sep 3 23:26:51.732753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862610971.mount: Deactivated successfully. 
Sep 3 23:26:51.764886 containerd[1519]: time="2025-09-03T23:26:51.764390931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:51.770432 containerd[1519]: time="2025-09-03T23:26:51.770398898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 3 23:26:51.771221 containerd[1519]: time="2025-09-03T23:26:51.771185787Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:51.773215 containerd[1519]: time="2025-09-03T23:26:51.773179149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:51.785209 containerd[1519]: time="2025-09-03T23:26:51.785164482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.414759382s" Sep 3 23:26:51.785209 containerd[1519]: time="2025-09-03T23:26:51.785202444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 3 23:26:51.788024 containerd[1519]: time="2025-09-03T23:26:51.787980334Z" level=info msg="CreateContainer within sandbox \"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 3 23:26:51.795166 
containerd[1519]: time="2025-09-03T23:26:51.794733307Z" level=info msg="Container 6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:51.802107 containerd[1519]: time="2025-09-03T23:26:51.802032793Z" level=info msg="CreateContainer within sandbox \"8163470eb3e2dcdc0119a9d1908517d3bac90796a8a6aa35308b226eac145807\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e\"" Sep 3 23:26:51.802744 containerd[1519]: time="2025-09-03T23:26:51.802715275Z" level=info msg="StartContainer for \"6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e\"" Sep 3 23:26:51.803761 containerd[1519]: time="2025-09-03T23:26:51.803735337Z" level=info msg="connecting to shim 6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e" address="unix:///run/containerd/s/b87d216241db39ea0ce75570f9bd8e935208fa2be6e3380c9559a8f0a21e343c" protocol=ttrpc version=3 Sep 3 23:26:51.831291 systemd[1]: Started cri-containerd-6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e.scope - libcontainer container 6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e. 
Sep 3 23:26:51.861120 containerd[1519]: time="2025-09-03T23:26:51.861085845Z" level=info msg="StartContainer for \"6323cd4e6309eed7c93d4d431851f17f8f0e00b73c687686d3b1edb8412e7a1e\" returns successfully" Sep 3 23:26:52.570795 kubelet[2618]: I0903 23:26:52.570715 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bb896d996-j8sxr" podStartSLOduration=2.159491655 podStartE2EDuration="4.57069994s" podCreationTimestamp="2025-09-03 23:26:48 +0000 UTC" firstStartedPulling="2025-09-03 23:26:49.37465872 +0000 UTC m=+36.044581467" lastFinishedPulling="2025-09-03 23:26:51.785867005 +0000 UTC m=+38.455789752" observedRunningTime="2025-09-03 23:26:52.56985485 +0000 UTC m=+39.239777637" watchObservedRunningTime="2025-09-03 23:26:52.57069994 +0000 UTC m=+39.240622727" Sep 3 23:26:54.421996 containerd[1519]: time="2025-09-03T23:26:54.421947206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-lfnp9,Uid:c33a11cf-fb3f-4fd0-a644-0bde83837f45,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:26:54.521330 systemd-networkd[1430]: calica705478652: Link UP Sep 3 23:26:54.521986 systemd-networkd[1430]: calica705478652: Gained carrier Sep 3 23:26:54.534889 containerd[1519]: 2025-09-03 23:26:54.460 [INFO][4137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0 calico-apiserver-6d59cd756f- calico-apiserver c33a11cf-fb3f-4fd0-a644-0bde83837f45 814 0 2025-09-03 23:26:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d59cd756f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d59cd756f-lfnp9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica705478652 [] [] }} 
ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-" Sep 3 23:26:54.534889 containerd[1519]: 2025-09-03 23:26:54.460 [INFO][4137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.534889 containerd[1519]: 2025-09-03 23:26:54.485 [INFO][4152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" HandleID="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Workload="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.485 [INFO][4152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" HandleID="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Workload="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d59cd756f-lfnp9", "timestamp":"2025-09-03 23:26:54.485618017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.485 [INFO][4152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.485 [INFO][4152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.485 [INFO][4152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.494 [INFO][4152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" host="localhost" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.498 [INFO][4152] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.502 [INFO][4152] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.504 [INFO][4152] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.506 [INFO][4152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:54.535089 containerd[1519]: 2025-09-03 23:26:54.506 [INFO][4152] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" host="localhost" Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.507 [INFO][4152] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402 Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.511 [INFO][4152] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" host="localhost" Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.516 [INFO][4152] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" host="localhost" Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.517 [INFO][4152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" host="localhost" Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.517 [INFO][4152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:54.535319 containerd[1519]: 2025-09-03 23:26:54.517 [INFO][4152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" HandleID="k8s-pod-network.2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Workload="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.535432 containerd[1519]: 2025-09-03 23:26:54.519 [INFO][4137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0", GenerateName:"calico-apiserver-6d59cd756f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c33a11cf-fb3f-4fd0-a644-0bde83837f45", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d59cd756f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d59cd756f-lfnp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica705478652", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:54.535499 containerd[1519]: 2025-09-03 23:26:54.519 [INFO][4137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.535499 containerd[1519]: 2025-09-03 23:26:54.519 [INFO][4137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica705478652 ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.535499 containerd[1519]: 2025-09-03 23:26:54.522 [INFO][4137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.535564 containerd[1519]: 2025-09-03 
23:26:54.523 [INFO][4137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0", GenerateName:"calico-apiserver-6d59cd756f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c33a11cf-fb3f-4fd0-a644-0bde83837f45", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d59cd756f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402", Pod:"calico-apiserver-6d59cd756f-lfnp9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica705478652", MAC:"32:0d:bb:86:1d:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:54.535608 containerd[1519]: 2025-09-03 23:26:54.531 [INFO][4137] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-lfnp9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--lfnp9-eth0" Sep 3 23:26:54.612615 containerd[1519]: time="2025-09-03T23:26:54.612568738Z" level=info msg="connecting to shim 2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402" address="unix:///run/containerd/s/6c96f1b2f815dbaa86ced527684cdc94bbba183f72a1f9f6202742be4dac8256" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:54.637390 systemd[1]: Started cri-containerd-2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402.scope - libcontainer container 2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402. Sep 3 23:26:54.649833 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:54.673807 containerd[1519]: time="2025-09-03T23:26:54.673137695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-lfnp9,Uid:c33a11cf-fb3f-4fd0-a644-0bde83837f45,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402\"" Sep 3 23:26:54.675356 containerd[1519]: time="2025-09-03T23:26:54.675325978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 3 23:26:55.422753 containerd[1519]: time="2025-09-03T23:26:55.422698950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-wbr7d,Uid:2edcb6a9-5ded-4011-9542-41e5097f9c68,Namespace:calico-apiserver,Attempt:0,}" Sep 3 23:26:55.423366 containerd[1519]: time="2025-09-03T23:26:55.422985966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6s7q,Uid:18b1e484-37ee-425c-815e-d87a59135b42,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:55.423366 containerd[1519]: 
time="2025-09-03T23:26:55.422988846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5664bd6fb6-wx2wj,Uid:315871cc-207c-4bc8-8418-826b1e40bbea,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:55.595004 systemd-networkd[1430]: calif462388e3ec: Link UP Sep 3 23:26:55.595871 systemd-networkd[1430]: calif462388e3ec: Gained carrier Sep 3 23:26:55.616683 containerd[1519]: 2025-09-03 23:26:55.485 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0 calico-apiserver-6d59cd756f- calico-apiserver 2edcb6a9-5ded-4011-9542-41e5097f9c68 811 0 2025-09-03 23:26:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d59cd756f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d59cd756f-wbr7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif462388e3ec [] [] }} ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-" Sep 3 23:26:55.616683 containerd[1519]: 2025-09-03 23:26:55.486 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.616683 containerd[1519]: 2025-09-03 23:26:55.535 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" 
HandleID="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Workload="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.536 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" HandleID="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Workload="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d59cd756f-wbr7d", "timestamp":"2025-09-03 23:26:55.53588041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.536 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.536 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.536 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.546 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" host="localhost" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.550 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.559 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.561 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.563 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.616982 containerd[1519]: 2025-09-03 23:26:55.563 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" host="localhost" Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.564 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.575 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" host="localhost" Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.586 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" host="localhost" Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.587 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" host="localhost" Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.587 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:55.617274 containerd[1519]: 2025-09-03 23:26:55.587 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" HandleID="k8s-pod-network.3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Workload="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.617446 containerd[1519]: 2025-09-03 23:26:55.591 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0", GenerateName:"calico-apiserver-6d59cd756f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2edcb6a9-5ded-4011-9542-41e5097f9c68", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d59cd756f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d59cd756f-wbr7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif462388e3ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.617527 containerd[1519]: 2025-09-03 23:26:55.592 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.617527 containerd[1519]: 2025-09-03 23:26:55.592 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif462388e3ec ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.617527 containerd[1519]: 2025-09-03 23:26:55.596 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.617644 containerd[1519]: 2025-09-03 23:26:55.596 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0", GenerateName:"calico-apiserver-6d59cd756f-", Namespace:"calico-apiserver", SelfLink:"", UID:"2edcb6a9-5ded-4011-9542-41e5097f9c68", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d59cd756f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd", Pod:"calico-apiserver-6d59cd756f-wbr7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif462388e3ec", MAC:"4a:57:6d:87:07:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.617717 containerd[1519]: 2025-09-03 23:26:55.612 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" Namespace="calico-apiserver" Pod="calico-apiserver-6d59cd756f-wbr7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d59cd756f--wbr7d-eth0" Sep 3 23:26:55.662659 containerd[1519]: time="2025-09-03T23:26:55.662619651Z" level=info msg="connecting to shim 3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd" address="unix:///run/containerd/s/53e4244b83d982923a08508bf19bf576b3d227fe3a93b1227d34f2f7778a29e6" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:55.697377 systemd[1]: Started cri-containerd-3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd.scope - libcontainer container 3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd. Sep 3 23:26:55.708559 systemd-networkd[1430]: calib064777790f: Link UP Sep 3 23:26:55.709326 systemd-networkd[1430]: calib064777790f: Gained carrier Sep 3 23:26:55.722543 containerd[1519]: 2025-09-03 23:26:55.492 [INFO][4245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0 calico-kube-controllers-5664bd6fb6- calico-system 315871cc-207c-4bc8-8418-826b1e40bbea 809 0 2025-09-03 23:26:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5664bd6fb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5664bd6fb6-wx2wj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib064777790f [] [] }} ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-" Sep 3 23:26:55.722543 containerd[1519]: 
2025-09-03 23:26:55.493 [INFO][4245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.722543 containerd[1519]: 2025-09-03 23:26:55.546 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" HandleID="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Workload="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.546 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" HandleID="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Workload="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5664bd6fb6-wx2wj", "timestamp":"2025-09-03 23:26:55.546253937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.546 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.587 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.587 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.650 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" host="localhost" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.659 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.667 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.671 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.674 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.722730 containerd[1519]: 2025-09-03 23:26:55.674 [INFO][4275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" host="localhost" Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.676 [INFO][4275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6 Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.685 [INFO][4275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" host="localhost" Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.694 [INFO][4275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" host="localhost" Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.694 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" host="localhost" Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.694 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:55.722971 containerd[1519]: 2025-09-03 23:26:55.694 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" HandleID="k8s-pod-network.da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Workload="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.723098 containerd[1519]: 2025-09-03 23:26:55.704 [INFO][4245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0", GenerateName:"calico-kube-controllers-5664bd6fb6-", Namespace:"calico-system", SelfLink:"", UID:"315871cc-207c-4bc8-8418-826b1e40bbea", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5664bd6fb6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5664bd6fb6-wx2wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib064777790f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.723164 containerd[1519]: 2025-09-03 23:26:55.706 [INFO][4245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.723164 containerd[1519]: 2025-09-03 23:26:55.706 [INFO][4245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib064777790f ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.723164 containerd[1519]: 2025-09-03 23:26:55.709 [INFO][4245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.723235 containerd[1519]: 2025-09-03 
23:26:55.710 [INFO][4245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0", GenerateName:"calico-kube-controllers-5664bd6fb6-", Namespace:"calico-system", SelfLink:"", UID:"315871cc-207c-4bc8-8418-826b1e40bbea", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5664bd6fb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6", Pod:"calico-kube-controllers-5664bd6fb6-wx2wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib064777790f", MAC:"ee:ad:85:16:78:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.723290 containerd[1519]: 2025-09-03 
23:26:55.719 [INFO][4245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" Namespace="calico-system" Pod="calico-kube-controllers-5664bd6fb6-wx2wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5664bd6fb6--wx2wj-eth0" Sep 3 23:26:55.753697 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:55.755620 containerd[1519]: time="2025-09-03T23:26:55.755549725Z" level=info msg="connecting to shim da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6" address="unix:///run/containerd/s/8887052878cf07cd5097a129dec233fa036fab68511af643aded3921922ab4a7" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:55.816396 systemd[1]: Started cri-containerd-da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6.scope - libcontainer container da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6. 
Sep 3 23:26:55.842771 systemd-networkd[1430]: cali0ca1bfd45b3: Link UP Sep 3 23:26:55.843819 systemd-networkd[1430]: cali0ca1bfd45b3: Gained carrier Sep 3 23:26:55.846262 systemd-networkd[1430]: calica705478652: Gained IPv6LL Sep 3 23:26:55.865512 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:55.866952 containerd[1519]: 2025-09-03 23:26:55.486 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--f6s7q-eth0 csi-node-driver- calico-system 18b1e484-37ee-425c-815e-d87a59135b42 684 0 2025-09-03 23:26:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-f6s7q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ca1bfd45b3 [] [] }} ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-" Sep 3 23:26:55.866952 containerd[1519]: 2025-09-03 23:26:55.486 [INFO][4223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.866952 containerd[1519]: 2025-09-03 23:26:55.550 [INFO][4274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" HandleID="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" 
Workload="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.550 [INFO][4274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" HandleID="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Workload="localhost-k8s-csi--node--driver--f6s7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004de40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-f6s7q", "timestamp":"2025-09-03 23:26:55.55033864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.550 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.694 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.695 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.750 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" host="localhost" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.779 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.797 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.819 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.822 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:55.867244 containerd[1519]: 2025-09-03 23:26:55.822 [INFO][4274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" host="localhost" Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.825 [INFO][4274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713 Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.829 [INFO][4274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" host="localhost" Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.836 [INFO][4274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" host="localhost" Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.836 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" host="localhost" Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.836 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:55.867667 containerd[1519]: 2025-09-03 23:26:55.836 [INFO][4274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" HandleID="k8s-pod-network.b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Workload="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.867909 containerd[1519]: 2025-09-03 23:26:55.841 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f6s7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18b1e484-37ee-425c-815e-d87a59135b42", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-f6s7q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ca1bfd45b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.867985 containerd[1519]: 2025-09-03 23:26:55.841 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.867985 containerd[1519]: 2025-09-03 23:26:55.841 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ca1bfd45b3 ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.867985 containerd[1519]: 2025-09-03 23:26:55.844 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.868046 containerd[1519]: 2025-09-03 23:26:55.844 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" 
Namespace="calico-system" Pod="csi-node-driver-f6s7q" WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--f6s7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18b1e484-37ee-425c-815e-d87a59135b42", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713", Pod:"csi-node-driver-f6s7q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ca1bfd45b3", MAC:"be:be:8d:e5:5f:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:55.868223 containerd[1519]: 2025-09-03 23:26:55.859 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" Namespace="calico-system" Pod="csi-node-driver-f6s7q" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--f6s7q-eth0" Sep 3 23:26:55.906449 containerd[1519]: time="2025-09-03T23:26:55.906408123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d59cd756f-wbr7d,Uid:2edcb6a9-5ded-4011-9542-41e5097f9c68,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd\"" Sep 3 23:26:55.916918 containerd[1519]: time="2025-09-03T23:26:55.916877094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5664bd6fb6-wx2wj,Uid:315871cc-207c-4bc8-8418-826b1e40bbea,Namespace:calico-system,Attempt:0,} returns sandbox id \"da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6\"" Sep 3 23:26:55.941222 containerd[1519]: time="2025-09-03T23:26:55.941180381Z" level=info msg="connecting to shim b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713" address="unix:///run/containerd/s/b9b7ff49ff1a961198e1e113fd51cd1d768bf9e23196c5c3f90eac76c19dae01" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:55.966320 systemd[1]: Started cri-containerd-b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713.scope - libcontainer container b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713. Sep 3 23:26:55.979364 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:55.994583 containerd[1519]: time="2025-09-03T23:26:55.994476011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f6s7q,Uid:18b1e484-37ee-425c-815e-d87a59135b42,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713\"" Sep 3 23:26:56.254486 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:57864.service - OpenSSH per-connection server daemon (10.0.0.1:57864). 
Sep 3 23:26:56.313332 containerd[1519]: time="2025-09-03T23:26:56.313293946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:56.313780 containerd[1519]: time="2025-09-03T23:26:56.313749530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 3 23:26:56.314543 containerd[1519]: time="2025-09-03T23:26:56.314520771Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:56.316368 containerd[1519]: time="2025-09-03T23:26:56.316334067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:56.316951 containerd[1519]: time="2025-09-03T23:26:56.316925819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.641564519s" Sep 3 23:26:56.317020 containerd[1519]: time="2025-09-03T23:26:56.316954980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 3 23:26:56.317913 containerd[1519]: time="2025-09-03T23:26:56.317891670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 3 23:26:56.321979 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 57864 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:26:56.324011 
sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:26:56.325614 containerd[1519]: time="2025-09-03T23:26:56.325581519Z" level=info msg="CreateContainer within sandbox \"2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 3 23:26:56.328340 systemd-logind[1491]: New session 8 of user core. Sep 3 23:26:56.331280 containerd[1519]: time="2025-09-03T23:26:56.331240221Z" level=info msg="Container a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:56.337688 containerd[1519]: time="2025-09-03T23:26:56.337572958Z" level=info msg="CreateContainer within sandbox \"2e2516ad082bb6eed8cd38d7a9d9f7385861d96e91130cec33b2727251eb6402\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35\"" Sep 3 23:26:56.338033 containerd[1519]: time="2025-09-03T23:26:56.337999900Z" level=info msg="StartContainer for \"a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35\"" Sep 3 23:26:56.338362 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 3 23:26:56.339023 containerd[1519]: time="2025-09-03T23:26:56.338998793Z" level=info msg="connecting to shim a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35" address="unix:///run/containerd/s/6c96f1b2f815dbaa86ced527684cdc94bbba183f72a1f9f6202742be4dac8256" protocol=ttrpc version=3 Sep 3 23:26:56.361310 systemd[1]: Started cri-containerd-a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35.scope - libcontainer container a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35. 
Sep 3 23:26:56.401025 containerd[1519]: time="2025-09-03T23:26:56.400991612Z" level=info msg="StartContainer for \"a91e43a381dbab70844fa28a0981184aa6df7d099bc0d3ac5455df0f74e42e35\" returns successfully" Sep 3 23:26:56.422476 kubelet[2618]: E0903 23:26:56.422422 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:56.423388 kubelet[2618]: E0903 23:26:56.422517 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:56.423555 containerd[1519]: time="2025-09-03T23:26:56.423405765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxr7s,Uid:cdcdef1c-95ea-4127-a5a9-56fdc7574efb,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:56.423555 containerd[1519]: time="2025-09-03T23:26:56.423461008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7tk4d,Uid:71e565b3-c93f-4af8-84a4-0264261cfed3,Namespace:kube-system,Attempt:0,}" Sep 3 23:26:56.581069 containerd[1519]: time="2025-09-03T23:26:56.581022632Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:56.588168 kubelet[2618]: I0903 23:26:56.587949 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d59cd756f-lfnp9" podStartSLOduration=24.94488604 podStartE2EDuration="26.58793068s" podCreationTimestamp="2025-09-03 23:26:30 +0000 UTC" firstStartedPulling="2025-09-03 23:26:54.674727824 +0000 UTC m=+41.344650571" lastFinishedPulling="2025-09-03 23:26:56.317772384 +0000 UTC m=+42.987695211" observedRunningTime="2025-09-03 23:26:56.587286325 +0000 UTC m=+43.257209112" watchObservedRunningTime="2025-09-03 23:26:56.58793068 +0000 UTC m=+43.257853467" 
Sep 3 23:26:56.602168 containerd[1519]: time="2025-09-03T23:26:56.602095233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 3 23:26:56.605624 containerd[1519]: time="2025-09-03T23:26:56.605590499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 287.590543ms" Sep 3 23:26:56.605624 containerd[1519]: time="2025-09-03T23:26:56.605624661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 3 23:26:56.608957 containerd[1519]: time="2025-09-03T23:26:56.608933637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 3 23:26:56.609706 containerd[1519]: time="2025-09-03T23:26:56.609657836Z" level=info msg="CreateContainer within sandbox \"3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 3 23:26:56.627453 systemd-networkd[1430]: cali46b86e1fec9: Link UP Sep 3 23:26:56.628234 systemd-networkd[1430]: cali46b86e1fec9: Gained carrier Sep 3 23:26:56.631617 containerd[1519]: time="2025-09-03T23:26:56.631576442Z" level=info msg="Container 971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:56.651459 containerd[1519]: 2025-09-03 23:26:56.497 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0 coredns-668d6bf9bc- kube-system cdcdef1c-95ea-4127-a5a9-56fdc7574efb 813 0 2025-09-03 23:26:19 
+0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hxr7s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46b86e1fec9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-" Sep 3 23:26:56.651459 containerd[1519]: 2025-09-03 23:26:56.497 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.651459 containerd[1519]: 2025-09-03 23:26:56.543 [INFO][4545] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" HandleID="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Workload="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.543 [INFO][4545] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" HandleID="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Workload="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hxr7s", "timestamp":"2025-09-03 23:26:56.543177898 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.543 [INFO][4545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.543 [INFO][4545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.543 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.562 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" host="localhost" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.571 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.578 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.584 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.592 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:56.652411 containerd[1519]: 2025-09-03 23:26:56.592 [INFO][4545] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" host="localhost" Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.595 [INFO][4545] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.601 [INFO][4545] ipam/ipam.go 
1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" host="localhost" Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.613 [INFO][4545] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" host="localhost" Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.613 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" host="localhost" Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.614 [INFO][4545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:56.652620 containerd[1519]: 2025-09-03 23:26:56.614 [INFO][4545] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" HandleID="k8s-pod-network.b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Workload="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.653037 containerd[1519]: 2025-09-03 23:26:56.621 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cdcdef1c-95ea-4127-a5a9-56fdc7574efb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 19, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hxr7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46b86e1fec9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:56.653125 containerd[1519]: 2025-09-03 23:26:56.621 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.653125 containerd[1519]: 2025-09-03 23:26:56.621 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46b86e1fec9 ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.653125 containerd[1519]: 2025-09-03 23:26:56.628 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.653312 containerd[1519]: 2025-09-03 23:26:56.633 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cdcdef1c-95ea-4127-a5a9-56fdc7574efb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da", Pod:"coredns-668d6bf9bc-hxr7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali46b86e1fec9", MAC:"a6:ef:e4:7a:b9:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:56.653312 containerd[1519]: 2025-09-03 23:26:56.643 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" Namespace="kube-system" Pod="coredns-668d6bf9bc-hxr7s" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hxr7s-eth0" Sep 3 23:26:56.653312 containerd[1519]: time="2025-09-03T23:26:56.652088293Z" level=info msg="CreateContainer within sandbox \"3167192c7e94b3274a9543550ec4621134d9f4302c84414afe71ffabb638b3dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741\"" Sep 3 23:26:56.654571 containerd[1519]: time="2025-09-03T23:26:56.653645816Z" level=info msg="StartContainer for \"971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741\"" Sep 3 23:26:56.654952 containerd[1519]: time="2025-09-03T23:26:56.654924564Z" level=info msg="connecting to shim 971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741" address="unix:///run/containerd/s/53e4244b83d982923a08508bf19bf576b3d227fe3a93b1227d34f2f7778a29e6" protocol=ttrpc version=3 Sep 3 23:26:56.690532 sshd[4472]: Connection closed by 10.0.0.1 port 57864 Sep 3 23:26:56.690242 sshd-session[4466]: pam_unix(sshd:session): session closed for user core Sep 3 23:26:56.691307 systemd[1]: Started 
cri-containerd-971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741.scope - libcontainer container 971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741. Sep 3 23:26:56.695034 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:57864.service: Deactivated successfully. Sep 3 23:26:56.698514 systemd[1]: session-8.scope: Deactivated successfully. Sep 3 23:26:56.699848 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Sep 3 23:26:56.702094 systemd-logind[1491]: Removed session 8. Sep 3 23:26:56.707338 containerd[1519]: time="2025-09-03T23:26:56.707270830Z" level=info msg="connecting to shim b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da" address="unix:///run/containerd/s/a44f7e58194e16716a22eeb50a3ecf8bc430511d1667655a189ed6736c817f33" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:56.714411 systemd-networkd[1430]: calid9da233c8d8: Link UP Sep 3 23:26:56.715160 systemd-networkd[1430]: calid9da233c8d8: Gained carrier Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.502 [INFO][4518] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0 coredns-668d6bf9bc- kube-system 71e565b3-c93f-4af8-84a4-0264261cfed3 805 0 2025-09-03 23:26:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7tk4d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9da233c8d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.503 [INFO][4518] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.578 [INFO][4552] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" HandleID="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Workload="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.578 [INFO][4552] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" HandleID="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Workload="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7tk4d", "timestamp":"2025-09-03 23:26:56.578234724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.578 [INFO][4552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.614 [INFO][4552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.614 [INFO][4552] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.663 [INFO][4552] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.674 [INFO][4552] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.683 [INFO][4552] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.686 [INFO][4552] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.689 [INFO][4552] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.689 [INFO][4552] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.693 [INFO][4552] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2 Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.700 [INFO][4552] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.706 [INFO][4552] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.707 [INFO][4552] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" host="localhost" Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.707 [INFO][4552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:56.732678 containerd[1519]: 2025-09-03 23:26:56.707 [INFO][4552] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" HandleID="k8s-pod-network.0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Workload="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.711 [INFO][4518] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71e565b3-c93f-4af8-84a4-0264261cfed3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7tk4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9da233c8d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.711 [INFO][4518] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.711 [INFO][4518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9da233c8d8 ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.715 [INFO][4518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.716 [INFO][4518] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71e565b3-c93f-4af8-84a4-0264261cfed3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2", Pod:"coredns-668d6bf9bc-7tk4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9da233c8d8", MAC:"7e:c9:1e:e4:05:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:56.733447 containerd[1519]: 2025-09-03 23:26:56.727 [INFO][4518] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" Namespace="kube-system" Pod="coredns-668d6bf9bc-7tk4d" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7tk4d-eth0" Sep 3 23:26:56.739360 systemd[1]: Started cri-containerd-b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da.scope - libcontainer container b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da. Sep 3 23:26:56.756849 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:56.766338 containerd[1519]: time="2025-09-03T23:26:56.766297211Z" level=info msg="connecting to shim 0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2" address="unix:///run/containerd/s/d3301c3d3773f70dc6a736a46b9aa4f072e7d9c83ea2e148b2ecaf9aaf04b643" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:56.777078 containerd[1519]: time="2025-09-03T23:26:56.777014461Z" level=info msg="StartContainer for \"971df075879c7e535071a25760c7d0eb073f0c1737600d43a349ffd7b1767741\" returns successfully" Sep 3 23:26:56.792857 containerd[1519]: time="2025-09-03T23:26:56.792812782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxr7s,Uid:cdcdef1c-95ea-4127-a5a9-56fdc7574efb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da\"" Sep 3 23:26:56.794557 kubelet[2618]: E0903 23:26:56.794534 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:56.798955 containerd[1519]: time="2025-09-03T23:26:56.798894625Z" level=info msg="CreateContainer within sandbox \"b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:26:56.810947 systemd[1]: Started cri-containerd-0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2.scope - libcontainer container 0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2. Sep 3 23:26:56.827634 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:56.834443 containerd[1519]: time="2025-09-03T23:26:56.834332071Z" level=info msg="Container 0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:56.850940 containerd[1519]: time="2025-09-03T23:26:56.850888352Z" level=info msg="CreateContainer within sandbox \"b49b8ddcb6ebe7b3ef2c5c93fe01106aea0d0e1addc28a27f3e747475ffca5da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97\"" Sep 3 23:26:56.851970 containerd[1519]: time="2025-09-03T23:26:56.851784000Z" level=info msg="StartContainer for \"0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97\"" Sep 3 23:26:56.854871 containerd[1519]: time="2025-09-03T23:26:56.854810041Z" level=info msg="connecting to shim 0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97" address="unix:///run/containerd/s/a44f7e58194e16716a22eeb50a3ecf8bc430511d1667655a189ed6736c817f33" protocol=ttrpc version=3 Sep 3 23:26:56.869265 systemd-networkd[1430]: calib064777790f: Gained IPv6LL Sep 3 23:26:56.882337 systemd[1]: Started cri-containerd-0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97.scope - libcontainer container 
0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97. Sep 3 23:26:56.889886 containerd[1519]: time="2025-09-03T23:26:56.889848785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7tk4d,Uid:71e565b3-c93f-4af8-84a4-0264261cfed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2\"" Sep 3 23:26:56.891370 kubelet[2618]: E0903 23:26:56.890874 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:56.894159 containerd[1519]: time="2025-09-03T23:26:56.894109652Z" level=info msg="CreateContainer within sandbox \"0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 3 23:26:56.905231 containerd[1519]: time="2025-09-03T23:26:56.905193522Z" level=info msg="Container 0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:56.911532 containerd[1519]: time="2025-09-03T23:26:56.911490977Z" level=info msg="CreateContainer within sandbox \"0365e8cfae8a72871574f6a7b037984bcf2134105bb766bc753180b2144768b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2\"" Sep 3 23:26:56.912248 containerd[1519]: time="2025-09-03T23:26:56.912223456Z" level=info msg="StartContainer for \"0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2\"" Sep 3 23:26:56.913917 containerd[1519]: time="2025-09-03T23:26:56.913199468Z" level=info msg="connecting to shim 0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2" address="unix:///run/containerd/s/d3301c3d3773f70dc6a736a46b9aa4f072e7d9c83ea2e148b2ecaf9aaf04b643" protocol=ttrpc version=3 Sep 3 23:26:56.917305 containerd[1519]: time="2025-09-03T23:26:56.917267524Z" 
level=info msg="StartContainer for \"0d1e2573c7c43b6dc0af75a4c9c27eb45074f6d24a4567e52f4546aa99c96f97\" returns successfully" Sep 3 23:26:56.948372 systemd[1]: Started cri-containerd-0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2.scope - libcontainer container 0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2. Sep 3 23:26:57.012372 containerd[1519]: time="2025-09-03T23:26:57.012301687Z" level=info msg="StartContainer for \"0780449e57d0d7a4b7fbbe83f7da17b36b2f9d06274d9220c5e98835fa40d4e2\" returns successfully" Sep 3 23:26:57.125252 systemd-networkd[1430]: cali0ca1bfd45b3: Gained IPv6LL Sep 3 23:26:57.317313 systemd-networkd[1430]: calif462388e3ec: Gained IPv6LL Sep 3 23:26:57.422821 containerd[1519]: time="2025-09-03T23:26:57.422709510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hldrj,Uid:e3f5b663-d94b-4b50-8439-ece6c594b74a,Namespace:calico-system,Attempt:0,}" Sep 3 23:26:57.624172 kubelet[2618]: E0903 23:26:57.623856 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:57.627681 kubelet[2618]: E0903 23:26:57.627468 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:57.640592 kubelet[2618]: I0903 23:26:57.640451 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d59cd756f-wbr7d" podStartSLOduration=26.941109229 podStartE2EDuration="27.640436091s" podCreationTimestamp="2025-09-03 23:26:30 +0000 UTC" firstStartedPulling="2025-09-03 23:26:55.908170939 +0000 UTC m=+42.578093726" lastFinishedPulling="2025-09-03 23:26:56.607497801 +0000 UTC m=+43.277420588" observedRunningTime="2025-09-03 23:26:57.640313725 +0000 UTC m=+44.310236472" watchObservedRunningTime="2025-09-03 
23:26:57.640436091 +0000 UTC m=+44.310358878" Sep 3 23:26:57.661621 kubelet[2618]: I0903 23:26:57.661559 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hxr7s" podStartSLOduration=38.661539227 podStartE2EDuration="38.661539227s" podCreationTimestamp="2025-09-03 23:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:57.661265212 +0000 UTC m=+44.331187999" watchObservedRunningTime="2025-09-03 23:26:57.661539227 +0000 UTC m=+44.331462014" Sep 3 23:26:57.699912 systemd-networkd[1430]: calie11363ba502: Link UP Sep 3 23:26:57.702131 systemd-networkd[1430]: calie11363ba502: Gained carrier Sep 3 23:26:57.724568 kubelet[2618]: I0903 23:26:57.724507 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7tk4d" podStartSLOduration=38.724486014 podStartE2EDuration="38.724486014s" podCreationTimestamp="2025-09-03 23:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-03 23:26:57.68376298 +0000 UTC m=+44.353685767" watchObservedRunningTime="2025-09-03 23:26:57.724486014 +0000 UTC m=+44.394408801" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.517 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--hldrj-eth0 goldmane-54d579b49d- calico-system e3f5b663-d94b-4b50-8439-ece6c594b74a 815 0 2025-09-03 23:26:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-hldrj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie11363ba502 [] 
[] }} ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.517 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.588 [INFO][4805] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" HandleID="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Workload="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.588 [INFO][4805] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" HandleID="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Workload="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-hldrj", "timestamp":"2025-09-03 23:26:57.588167138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.588 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.588 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.588 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.612 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.637 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.655 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.659 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.666 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.666 [INFO][4805] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.669 [INFO][4805] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.675 [INFO][4805] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.686 [INFO][4805] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.686 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" host="localhost" Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.686 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 3 23:26:57.727656 containerd[1519]: 2025-09-03 23:26:57.686 [INFO][4805] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" HandleID="k8s-pod-network.c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Workload="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.696 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hldrj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e3f5b663-d94b-4b50-8439-ece6c594b74a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-hldrj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie11363ba502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.696 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.696 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie11363ba502 ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.704 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.704 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" 
Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--hldrj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e3f5b663-d94b-4b50-8439-ece6c594b74a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 3, 23, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac", Pod:"goldmane-54d579b49d-hldrj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie11363ba502", MAC:"3e:31:33:e4:73:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 3 23:26:57.728901 containerd[1519]: 2025-09-03 23:26:57.724 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" Namespace="calico-system" Pod="goldmane-54d579b49d-hldrj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--hldrj-eth0" Sep 3 23:26:57.809419 containerd[1519]: 
time="2025-09-03T23:26:57.809373260Z" level=info msg="connecting to shim c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac" address="unix:///run/containerd/s/78be86755daebccb6235962c20cb3d01f7d82d0e7c34718fc1c765f787802f52" namespace=k8s.io protocol=ttrpc version=3 Sep 3 23:26:57.846373 systemd[1]: Started cri-containerd-c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac.scope - libcontainer container c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac. Sep 3 23:26:57.859109 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 3 23:26:57.883127 containerd[1519]: time="2025-09-03T23:26:57.883064525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hldrj,Uid:e3f5b663-d94b-4b50-8439-ece6c594b74a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac\"" Sep 3 23:26:58.149285 systemd-networkd[1430]: cali46b86e1fec9: Gained IPv6LL Sep 3 23:26:58.444459 containerd[1519]: time="2025-09-03T23:26:58.444346718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:58.445446 containerd[1519]: time="2025-09-03T23:26:58.445410612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 3 23:26:58.447929 containerd[1519]: time="2025-09-03T23:26:58.447053895Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:58.449777 containerd[1519]: time="2025-09-03T23:26:58.449722550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:58.450878 containerd[1519]: time="2025-09-03T23:26:58.450831366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.841867008s" Sep 3 23:26:58.450930 containerd[1519]: time="2025-09-03T23:26:58.450884369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 3 23:26:58.451781 containerd[1519]: time="2025-09-03T23:26:58.451749373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 3 23:26:58.461379 containerd[1519]: time="2025-09-03T23:26:58.461331139Z" level=info msg="CreateContainer within sandbox \"da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 3 23:26:58.469299 systemd-networkd[1430]: calid9da233c8d8: Gained IPv6LL Sep 3 23:26:58.471481 containerd[1519]: time="2025-09-03T23:26:58.471438651Z" level=info msg="Container 591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:58.481298 containerd[1519]: time="2025-09-03T23:26:58.481266469Z" level=info msg="CreateContainer within sandbox \"da943d0db9ec9b7aeecaaf6260325c59eb647e1a2b3c6cde6ae5ca8088cee8d6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc\"" Sep 3 23:26:58.481839 containerd[1519]: time="2025-09-03T23:26:58.481733013Z" level=info msg="StartContainer for 
\"591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc\"" Sep 3 23:26:58.486758 containerd[1519]: time="2025-09-03T23:26:58.486725546Z" level=info msg="connecting to shim 591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc" address="unix:///run/containerd/s/8887052878cf07cd5097a129dec233fa036fab68511af643aded3921922ab4a7" protocol=ttrpc version=3 Sep 3 23:26:58.513360 systemd[1]: Started cri-containerd-591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc.scope - libcontainer container 591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc. Sep 3 23:26:58.559673 containerd[1519]: time="2025-09-03T23:26:58.559613640Z" level=info msg="StartContainer for \"591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc\" returns successfully" Sep 3 23:26:58.634758 kubelet[2618]: E0903 23:26:58.634672 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:58.635271 kubelet[2618]: E0903 23:26:58.635123 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:58.637695 kubelet[2618]: I0903 23:26:58.637650 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 3 23:26:58.648744 kubelet[2618]: I0903 23:26:58.648691 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5664bd6fb6-wx2wj" podStartSLOduration=21.114944377 podStartE2EDuration="23.648641832s" podCreationTimestamp="2025-09-03 23:26:35 +0000 UTC" firstStartedPulling="2025-09-03 23:26:55.917991595 +0000 UTC m=+42.587914382" lastFinishedPulling="2025-09-03 23:26:58.45168909 +0000 UTC m=+45.121611837" observedRunningTime="2025-09-03 23:26:58.647756587 +0000 UTC m=+45.317679414" watchObservedRunningTime="2025-09-03 
23:26:58.648641832 +0000 UTC m=+45.318564619" Sep 3 23:26:59.429424 systemd-networkd[1430]: calie11363ba502: Gained IPv6LL Sep 3 23:26:59.636995 kubelet[2618]: E0903 23:26:59.636912 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:59.637612 kubelet[2618]: E0903 23:26:59.637041 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 3 23:26:59.688848 containerd[1519]: time="2025-09-03T23:26:59.688462747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc\" id:\"68438965041367ab2a00e3e6c9d0366c3d1aecd629dda0bf3469e11e63862dde\" pid:4949 exited_at:{seconds:1756942019 nanos:683727953}" Sep 3 23:26:59.883630 containerd[1519]: time="2025-09-03T23:26:59.883577733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:59.884101 containerd[1519]: time="2025-09-03T23:26:59.884079037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 3 23:26:59.885056 containerd[1519]: time="2025-09-03T23:26:59.885023844Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:59.886895 containerd[1519]: time="2025-09-03T23:26:59.886864455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:26:59.887871 containerd[1519]: time="2025-09-03T23:26:59.887837864Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.436058769s" Sep 3 23:26:59.887871 containerd[1519]: time="2025-09-03T23:26:59.887868305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 3 23:26:59.889034 containerd[1519]: time="2025-09-03T23:26:59.889000041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 3 23:26:59.890951 containerd[1519]: time="2025-09-03T23:26:59.890915496Z" level=info msg="CreateContainer within sandbox \"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 3 23:26:59.899577 containerd[1519]: time="2025-09-03T23:26:59.898362825Z" level=info msg="Container 790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:26:59.917906 containerd[1519]: time="2025-09-03T23:26:59.917859431Z" level=info msg="CreateContainer within sandbox \"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555\"" Sep 3 23:26:59.918370 containerd[1519]: time="2025-09-03T23:26:59.918308053Z" level=info msg="StartContainer for \"790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555\"" Sep 3 23:26:59.919646 containerd[1519]: time="2025-09-03T23:26:59.919623918Z" level=info msg="connecting to shim 790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555" address="unix:///run/containerd/s/b9b7ff49ff1a961198e1e113fd51cd1d768bf9e23196c5c3f90eac76c19dae01" 
protocol=ttrpc version=3 Sep 3 23:26:59.940325 systemd[1]: Started cri-containerd-790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555.scope - libcontainer container 790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555. Sep 3 23:26:59.971455 containerd[1519]: time="2025-09-03T23:26:59.971410964Z" level=info msg="StartContainer for \"790faa50cdeea0c9bd83c57a01161f053dcff8d29393fc50c504e7026aa18555\" returns successfully" Sep 3 23:27:01.687819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721446855.mount: Deactivated successfully. Sep 3 23:27:01.705590 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:37216.service - OpenSSH per-connection server daemon (10.0.0.1:37216). Sep 3 23:27:01.782983 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 37216 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:01.785303 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:01.790677 systemd-logind[1491]: New session 9 of user core. Sep 3 23:27:01.799322 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 3 23:27:02.026387 sshd[5002]: Connection closed by 10.0.0.1 port 37216 Sep 3 23:27:02.027079 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:02.031753 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:37216.service: Deactivated successfully. Sep 3 23:27:02.034767 systemd[1]: session-9.scope: Deactivated successfully. Sep 3 23:27:02.035462 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Sep 3 23:27:02.036729 systemd-logind[1491]: Removed session 9. 
Sep 3 23:27:02.233751 containerd[1519]: time="2025-09-03T23:27:02.233424995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:02.234565 containerd[1519]: time="2025-09-03T23:27:02.234120907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 3 23:27:02.235832 containerd[1519]: time="2025-09-03T23:27:02.235804665Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:02.239761 containerd[1519]: time="2025-09-03T23:27:02.239647204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:02.240943 containerd[1519]: time="2025-09-03T23:27:02.240917943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.351885501s" Sep 3 23:27:02.241014 containerd[1519]: time="2025-09-03T23:27:02.240946864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 3 23:27:02.243271 containerd[1519]: time="2025-09-03T23:27:02.242883234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 3 23:27:02.245852 containerd[1519]: time="2025-09-03T23:27:02.245819691Z" level=info msg="CreateContainer within sandbox 
\"c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 3 23:27:02.256795 containerd[1519]: time="2025-09-03T23:27:02.256755480Z" level=info msg="Container 760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:27:02.267780 containerd[1519]: time="2025-09-03T23:27:02.267741671Z" level=info msg="CreateContainer within sandbox \"c997a482ce6e53c646277fae7eb7f3e69174b3fd622087c09af54c24f33aa7ac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\"" Sep 3 23:27:02.268342 containerd[1519]: time="2025-09-03T23:27:02.268264895Z" level=info msg="StartContainer for \"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\"" Sep 3 23:27:02.269892 containerd[1519]: time="2025-09-03T23:27:02.269866049Z" level=info msg="connecting to shim 760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432" address="unix:///run/containerd/s/78be86755daebccb6235962c20cb3d01f7d82d0e7c34718fc1c765f787802f52" protocol=ttrpc version=3 Sep 3 23:27:02.298373 systemd[1]: Started cri-containerd-760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432.scope - libcontainer container 760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432. 
Sep 3 23:27:02.349373 containerd[1519]: time="2025-09-03T23:27:02.349319025Z" level=info msg="StartContainer for \"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\" returns successfully" Sep 3 23:27:02.798307 containerd[1519]: time="2025-09-03T23:27:02.798242104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\" id:\"a2ae36daa72adf1bda56027900f07da876beb9cee01ec65f283a54e2b2c01bdf\" pid:5066 exit_status:1 exited_at:{seconds:1756942022 nanos:797412665}" Sep 3 23:27:03.380839 containerd[1519]: time="2025-09-03T23:27:03.380794501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:03.381691 containerd[1519]: time="2025-09-03T23:27:03.381609898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 3 23:27:03.382460 containerd[1519]: time="2025-09-03T23:27:03.382234207Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:03.384567 containerd[1519]: time="2025-09-03T23:27:03.384536912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 3 23:27:03.385757 containerd[1519]: time="2025-09-03T23:27:03.385713045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.142797449s" Sep 3 23:27:03.385757 containerd[1519]: time="2025-09-03T23:27:03.385750487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 3 23:27:03.387813 containerd[1519]: time="2025-09-03T23:27:03.387788100Z" level=info msg="CreateContainer within sandbox \"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 3 23:27:03.394592 containerd[1519]: time="2025-09-03T23:27:03.394260515Z" level=info msg="Container de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3: CDI devices from CRI Config.CDIDevices: []" Sep 3 23:27:03.403240 containerd[1519]: time="2025-09-03T23:27:03.403204843Z" level=info msg="CreateContainer within sandbox \"b7af512bee723a4d0c8d94ff46ac647eb5e91b2c90bc0df79f495f760be48713\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3\"" Sep 3 23:27:03.403719 containerd[1519]: time="2025-09-03T23:27:03.403668185Z" level=info msg="StartContainer for \"de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3\"" Sep 3 23:27:03.405236 containerd[1519]: time="2025-09-03T23:27:03.405211495Z" level=info msg="connecting to shim de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3" address="unix:///run/containerd/s/b9b7ff49ff1a961198e1e113fd51cd1d768bf9e23196c5c3f90eac76c19dae01" protocol=ttrpc version=3 Sep 3 23:27:03.433305 systemd[1]: Started cri-containerd-de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3.scope - libcontainer container de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3. 
Sep 3 23:27:03.488380 containerd[1519]: time="2025-09-03T23:27:03.487925309Z" level=info msg="StartContainer for \"de0d6dc3c80623a53632f09d0bf65094cfdb844b0b03d430d494e99f5b52aef3\" returns successfully" Sep 3 23:27:03.676851 kubelet[2618]: I0903 23:27:03.676712 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hldrj" podStartSLOduration=24.318765227 podStartE2EDuration="28.676695961s" podCreationTimestamp="2025-09-03 23:26:35 +0000 UTC" firstStartedPulling="2025-09-03 23:26:57.88430675 +0000 UTC m=+44.554229497" lastFinishedPulling="2025-09-03 23:27:02.242237444 +0000 UTC m=+48.912160231" observedRunningTime="2025-09-03 23:27:02.673901201 +0000 UTC m=+49.343823988" watchObservedRunningTime="2025-09-03 23:27:03.676695961 +0000 UTC m=+50.346618748" Sep 3 23:27:03.677211 kubelet[2618]: I0903 23:27:03.677093 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f6s7q" podStartSLOduration=22.286278734 podStartE2EDuration="29.677086899s" podCreationTimestamp="2025-09-03 23:26:34 +0000 UTC" firstStartedPulling="2025-09-03 23:26:55.995514508 +0000 UTC m=+42.665437295" lastFinishedPulling="2025-09-03 23:27:03.386322673 +0000 UTC m=+50.056245460" observedRunningTime="2025-09-03 23:27:03.673743066 +0000 UTC m=+50.343665853" watchObservedRunningTime="2025-09-03 23:27:03.677086899 +0000 UTC m=+50.347009686" Sep 3 23:27:03.771189 containerd[1519]: time="2025-09-03T23:27:03.771127469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\" id:\"ed504909c914a0bd3db414956f112308e222da5c4bf62487396bc0083e4cd96c\" pid:5138 exit_status:1 exited_at:{seconds:1756942023 nanos:770851377}" Sep 3 23:27:04.494685 kubelet[2618]: I0903 23:27:04.494654 2618 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 3 23:27:04.497315 kubelet[2618]: I0903 23:27:04.497283 2618 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 3 23:27:07.043706 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:37220.service - OpenSSH per-connection server daemon (10.0.0.1:37220). Sep 3 23:27:07.104685 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 37220 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:07.106070 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:07.109845 systemd-logind[1491]: New session 10 of user core. Sep 3 23:27:07.121302 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 3 23:27:07.310447 sshd[5156]: Connection closed by 10.0.0.1 port 37220 Sep 3 23:27:07.310871 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:07.325202 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:37220.service: Deactivated successfully. Sep 3 23:27:07.327759 systemd[1]: session-10.scope: Deactivated successfully. Sep 3 23:27:07.328549 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Sep 3 23:27:07.332944 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:37222.service - OpenSSH per-connection server daemon (10.0.0.1:37222). Sep 3 23:27:07.334518 systemd-logind[1491]: Removed session 10. Sep 3 23:27:07.393743 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 37222 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:07.395062 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:07.399566 systemd-logind[1491]: New session 11 of user core. Sep 3 23:27:07.403275 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 3 23:27:07.619346 sshd[5173]: Connection closed by 10.0.0.1 port 37222 Sep 3 23:27:07.620543 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:07.635560 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:37222.service: Deactivated successfully. Sep 3 23:27:07.637659 systemd[1]: session-11.scope: Deactivated successfully. Sep 3 23:27:07.639692 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Sep 3 23:27:07.644171 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:37224.service - OpenSSH per-connection server daemon (10.0.0.1:37224). Sep 3 23:27:07.645098 systemd-logind[1491]: Removed session 11. Sep 3 23:27:07.694744 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 37224 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:07.696063 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:07.700669 systemd-logind[1491]: New session 12 of user core. Sep 3 23:27:07.709328 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 3 23:27:07.866021 sshd[5187]: Connection closed by 10.0.0.1 port 37224 Sep 3 23:27:07.866341 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:07.869680 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:37224.service: Deactivated successfully. Sep 3 23:27:07.871600 systemd[1]: session-12.scope: Deactivated successfully. Sep 3 23:27:07.873340 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit. Sep 3 23:27:07.874612 systemd-logind[1491]: Removed session 12. Sep 3 23:27:12.878537 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:49826.service - OpenSSH per-connection server daemon (10.0.0.1:49826). 
Sep 3 23:27:12.930505 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 49826 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:12.931829 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:12.935697 systemd-logind[1491]: New session 13 of user core. Sep 3 23:27:12.943330 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 3 23:27:13.069248 sshd[5210]: Connection closed by 10.0.0.1 port 49826 Sep 3 23:27:13.069618 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:13.080441 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:49826.service: Deactivated successfully. Sep 3 23:27:13.082223 systemd[1]: session-13.scope: Deactivated successfully. Sep 3 23:27:13.082987 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Sep 3 23:27:13.085391 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:49834.service - OpenSSH per-connection server daemon (10.0.0.1:49834). Sep 3 23:27:13.086463 systemd-logind[1491]: Removed session 13. Sep 3 23:27:13.135898 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 49834 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI Sep 3 23:27:13.137096 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 3 23:27:13.141126 systemd-logind[1491]: New session 14 of user core. Sep 3 23:27:13.157323 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 3 23:27:13.352407 sshd[5225]: Connection closed by 10.0.0.1 port 49834 Sep 3 23:27:13.352889 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Sep 3 23:27:13.363893 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:49834.service: Deactivated successfully. Sep 3 23:27:13.366510 systemd[1]: session-14.scope: Deactivated successfully. Sep 3 23:27:13.367350 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. 
Sep 3 23:27:13.370019 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:49844.service - OpenSSH per-connection server daemon (10.0.0.1:49844).
Sep 3 23:27:13.370634 systemd-logind[1491]: Removed session 14.
Sep 3 23:27:13.431331 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 49844 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:13.432517 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:13.436261 systemd-logind[1491]: New session 15 of user core.
Sep 3 23:27:13.443304 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 3 23:27:13.680138 kubelet[2618]: I0903 23:27:13.677343 2618 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 3 23:27:14.067318 sshd[5242]: Connection closed by 10.0.0.1 port 49844
Sep 3 23:27:14.068281 sshd-session[5238]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:14.077041 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:49844.service: Deactivated successfully.
Sep 3 23:27:14.080258 systemd[1]: session-15.scope: Deactivated successfully.
Sep 3 23:27:14.083182 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit.
Sep 3 23:27:14.089671 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:49856.service - OpenSSH per-connection server daemon (10.0.0.1:49856).
Sep 3 23:27:14.091133 systemd-logind[1491]: Removed session 15.
Sep 3 23:27:14.142362 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 49856 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:14.143691 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:14.147842 systemd-logind[1491]: New session 16 of user core.
Sep 3 23:27:14.159314 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 3 23:27:14.467237 sshd[5264]: Connection closed by 10.0.0.1 port 49856
Sep 3 23:27:14.467691 sshd-session[5262]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:14.476896 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:49856.service: Deactivated successfully.
Sep 3 23:27:14.480658 systemd[1]: session-16.scope: Deactivated successfully.
Sep 3 23:27:14.481396 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit.
Sep 3 23:27:14.484603 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:49868.service - OpenSSH per-connection server daemon (10.0.0.1:49868).
Sep 3 23:27:14.485638 systemd-logind[1491]: Removed session 16.
Sep 3 23:27:14.542595 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 49868 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:14.543927 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:14.549827 systemd-logind[1491]: New session 17 of user core.
Sep 3 23:27:14.558328 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 3 23:27:14.703883 sshd[5278]: Connection closed by 10.0.0.1 port 49868
Sep 3 23:27:14.704422 sshd-session[5276]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:14.707661 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:49868.service: Deactivated successfully.
Sep 3 23:27:14.709477 systemd[1]: session-17.scope: Deactivated successfully.
Sep 3 23:27:14.710261 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit.
Sep 3 23:27:14.711442 systemd-logind[1491]: Removed session 17.
Sep 3 23:27:19.632484 containerd[1519]: time="2025-09-03T23:27:19.632088685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c302c714faa71c8ba3afd986401d44ddbe242c0445e1a1d03f86eb8f3c323c09\" id:\"54f18c33e5f1f9990e570aa7db68a9d528abc697224d36ba0772de64920813cb\" pid:5309 exited_at:{seconds:1756942039 nanos:631614427}"
Sep 3 23:27:19.719444 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:49878.service - OpenSSH per-connection server daemon (10.0.0.1:49878).
Sep 3 23:27:19.793391 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 49878 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:19.794757 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:19.800236 systemd-logind[1491]: New session 18 of user core.
Sep 3 23:27:19.810340 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 3 23:27:19.991789 sshd[5324]: Connection closed by 10.0.0.1 port 49878
Sep 3 23:27:19.992291 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:19.995654 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:49878.service: Deactivated successfully.
Sep 3 23:27:19.997385 systemd[1]: session-18.scope: Deactivated successfully.
Sep 3 23:27:19.998092 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit.
Sep 3 23:27:19.999397 systemd-logind[1491]: Removed session 18.
Sep 3 23:27:25.006816 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:52122.service - OpenSSH per-connection server daemon (10.0.0.1:52122).
Sep 3 23:27:25.058313 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 52122 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:25.059628 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:25.063386 systemd-logind[1491]: New session 19 of user core.
Sep 3 23:27:25.073319 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 3 23:27:25.196456 sshd[5345]: Connection closed by 10.0.0.1 port 52122
Sep 3 23:27:25.196796 sshd-session[5343]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:25.200171 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:52122.service: Deactivated successfully.
Sep 3 23:27:25.202076 systemd[1]: session-19.scope: Deactivated successfully.
Sep 3 23:27:25.203709 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit.
Sep 3 23:27:25.205449 systemd-logind[1491]: Removed session 19.
Sep 3 23:27:25.423266 kubelet[2618]: E0903 23:27:25.423097 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:27:29.422173 kubelet[2618]: E0903 23:27:29.422102 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 3 23:27:29.679002 containerd[1519]: time="2025-09-03T23:27:29.678893841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591bdec34d6b293529875d85cfdfb668725b1b90e64e7d9ea84f84fd7639c0bc\" id:\"d097a5198e4c1f5bf4fca50257dc305cad8913ff9cfb2454cba693240761c481\" pid:5375 exited_at:{seconds:1756942049 nanos:678691006}"
Sep 3 23:27:30.213824 systemd[1]: Started sshd@19-10.0.0.63:22-10.0.0.1:45944.service - OpenSSH per-connection server daemon (10.0.0.1:45944).
Sep 3 23:27:30.277057 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 45944 ssh2: RSA SHA256:xd5P2EY0SShpzmSaqqMMlsC8/eUu2H3GFJ+XdJbOcTI
Sep 3 23:27:30.278481 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 3 23:27:30.283236 systemd-logind[1491]: New session 20 of user core.
Sep 3 23:27:30.294373 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 3 23:27:30.451231 sshd[5388]: Connection closed by 10.0.0.1 port 45944
Sep 3 23:27:30.451564 sshd-session[5386]: pam_unix(sshd:session): session closed for user core
Sep 3 23:27:30.454997 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit.
Sep 3 23:27:30.455317 systemd[1]: sshd@19-10.0.0.63:22-10.0.0.1:45944.service: Deactivated successfully.
Sep 3 23:27:30.457170 systemd[1]: session-20.scope: Deactivated successfully.
Sep 3 23:27:30.459039 systemd-logind[1491]: Removed session 20.
Sep 3 23:27:31.165407 containerd[1519]: time="2025-09-03T23:27:31.165372502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"760013f6df14fb765a9e32e2513cc3ffc084a575243bc777bcaf20e0b7a84432\" id:\"a77552501730b88796c02cd3e7620c370dd2a8a02d11a14ff7812b66b65d100f\" pid:5411 exited_at:{seconds:1756942051 nanos:164898392}"