Sep 16 04:37:45.760945 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 16 04:37:45.760966 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025
Sep 16 04:37:45.760976 kernel: KASLR enabled
Sep 16 04:37:45.760982 kernel: efi: EFI v2.7 by EDK II
Sep 16 04:37:45.760987 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 16 04:37:45.760993 kernel: random: crng init done
Sep 16 04:37:45.761000 kernel: secureboot: Secure boot disabled
Sep 16 04:37:45.761005 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:37:45.761011 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 16 04:37:45.761018 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 16 04:37:45.761024 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761030 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761036 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761042 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761049 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761056 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761063 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761069 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761075 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:37:45.761081 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 16 04:37:45.761087 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 16 04:37:45.761094 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 16 04:37:45.761100 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 16 04:37:45.761106 kernel: Zone ranges:
Sep 16 04:37:45.761112 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 16 04:37:45.761119 kernel: DMA32 empty
Sep 16 04:37:45.761125 kernel: Normal empty
Sep 16 04:37:45.761131 kernel: Device empty
Sep 16 04:37:45.761137 kernel: Movable zone start for each node
Sep 16 04:37:45.761144 kernel: Early memory node ranges
Sep 16 04:37:45.761150 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 16 04:37:45.761156 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 16 04:37:45.761162 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 16 04:37:45.761168 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 16 04:37:45.761175 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 16 04:37:45.761180 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 16 04:37:45.761187 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 16 04:37:45.761194 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 16 04:37:45.761200 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 16 04:37:45.761206 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 16 04:37:45.761215 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 16 04:37:45.761222 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 16 04:37:45.761228 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 16 04:37:45.761236 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 16 04:37:45.761242 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 16 04:37:45.761249 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 16 04:37:45.761255 kernel: psci: probing for conduit method from ACPI.
Sep 16 04:37:45.761262 kernel: psci: PSCIv1.1 detected in firmware.
Sep 16 04:37:45.761268 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 16 04:37:45.761275 kernel: psci: Trusted OS migration not required
Sep 16 04:37:45.761281 kernel: psci: SMC Calling Convention v1.1
Sep 16 04:37:45.761287 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 16 04:37:45.761294 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 16 04:37:45.761302 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 16 04:37:45.761308 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 16 04:37:45.761315 kernel: Detected PIPT I-cache on CPU0
Sep 16 04:37:45.761321 kernel: CPU features: detected: GIC system register CPU interface
Sep 16 04:37:45.761328 kernel: CPU features: detected: Spectre-v4
Sep 16 04:37:45.761334 kernel: CPU features: detected: Spectre-BHB
Sep 16 04:37:45.761341 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 16 04:37:45.761347 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 16 04:37:45.761353 kernel: CPU features: detected: ARM erratum 1418040
Sep 16 04:37:45.761360 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 16 04:37:45.761366 kernel: alternatives: applying boot alternatives
Sep 16 04:37:45.761374 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313
Sep 16 04:37:45.761382 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:37:45.761389 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:37:45.761396 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:37:45.761402 kernel: Fallback order for Node 0: 0
Sep 16 04:37:45.761409 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 16 04:37:45.761415 kernel: Policy zone: DMA
Sep 16 04:37:45.761422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:37:45.761428 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 16 04:37:45.761434 kernel: software IO TLB: area num 4.
Sep 16 04:37:45.761441 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 16 04:37:45.761447 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 16 04:37:45.761455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 16 04:37:45.761462 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:37:45.761469 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:37:45.761476 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 16 04:37:45.761482 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:37:45.761489 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:37:45.761495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:37:45.761502 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 16 04:37:45.761509 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:37:45.761546 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:37:45.761553 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 16 04:37:45.761562 kernel: GICv3: 256 SPIs implemented
Sep 16 04:37:45.761568 kernel: GICv3: 0 Extended SPIs implemented
Sep 16 04:37:45.761575 kernel: Root IRQ handler: gic_handle_irq
Sep 16 04:37:45.761581 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 16 04:37:45.761588 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 16 04:37:45.761594 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 16 04:37:45.761601 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 16 04:37:45.761607 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 16 04:37:45.761614 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 16 04:37:45.761620 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 16 04:37:45.761627 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 16 04:37:45.761633 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:37:45.761641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 16 04:37:45.761647 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 16 04:37:45.761654 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 16 04:37:45.761661 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 16 04:37:45.761667 kernel: arm-pv: using stolen time PV
Sep 16 04:37:45.761674 kernel: Console: colour dummy device 80x25
Sep 16 04:37:45.761680 kernel: ACPI: Core revision 20240827
Sep 16 04:37:45.761687 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 16 04:37:45.761694 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:37:45.761701 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:37:45.761709 kernel: landlock: Up and running.
Sep 16 04:37:45.761715 kernel: SELinux: Initializing.
Sep 16 04:37:45.761722 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:37:45.761729 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:37:45.761735 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:37:45.761742 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:37:45.761749 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:37:45.761755 kernel: Remapping and enabling EFI services.
Sep 16 04:37:45.761762 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:37:45.761774 kernel: Detected PIPT I-cache on CPU1
Sep 16 04:37:45.761782 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 16 04:37:45.761789 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 16 04:37:45.761797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 16 04:37:45.761804 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 16 04:37:45.761811 kernel: Detected PIPT I-cache on CPU2
Sep 16 04:37:45.761818 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 16 04:37:45.761825 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 16 04:37:45.761834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 16 04:37:45.761840 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 16 04:37:45.761848 kernel: Detected PIPT I-cache on CPU3
Sep 16 04:37:45.761855 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 16 04:37:45.761862 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 16 04:37:45.761869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 16 04:37:45.761875 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 16 04:37:45.761882 kernel: smp: Brought up 1 node, 4 CPUs
Sep 16 04:37:45.761889 kernel: SMP: Total of 4 processors activated.
Sep 16 04:37:45.761898 kernel: CPU: All CPU(s) started at EL1
Sep 16 04:37:45.761905 kernel: CPU features: detected: 32-bit EL0 Support
Sep 16 04:37:45.761912 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 16 04:37:45.761919 kernel: CPU features: detected: Common not Private translations
Sep 16 04:37:45.761926 kernel: CPU features: detected: CRC32 instructions
Sep 16 04:37:45.761934 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 16 04:37:45.761941 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 16 04:37:45.761948 kernel: CPU features: detected: LSE atomic instructions
Sep 16 04:37:45.761955 kernel: CPU features: detected: Privileged Access Never
Sep 16 04:37:45.761963 kernel: CPU features: detected: RAS Extension Support
Sep 16 04:37:45.761970 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 16 04:37:45.761977 kernel: alternatives: applying system-wide alternatives
Sep 16 04:37:45.761985 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 16 04:37:45.761992 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 16 04:37:45.761999 kernel: devtmpfs: initialized
Sep 16 04:37:45.762007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:37:45.762014 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 16 04:37:45.762021 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 16 04:37:45.762029 kernel: 0 pages in range for non-PLT usage
Sep 16 04:37:45.762036 kernel: 508560 pages in range for PLT usage
Sep 16 04:37:45.762044 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:37:45.762050 kernel: SMBIOS 3.0.0 present.
Sep 16 04:37:45.762058 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 16 04:37:45.762065 kernel: DMI: Memory slots populated: 1/1
Sep 16 04:37:45.762072 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:37:45.762079 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 16 04:37:45.762086 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 16 04:37:45.762095 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 16 04:37:45.762102 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:37:45.762109 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 16 04:37:45.762116 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:37:45.762123 kernel: cpuidle: using governor menu
Sep 16 04:37:45.762130 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 16 04:37:45.762137 kernel: ASID allocator initialised with 32768 entries
Sep 16 04:37:45.762144 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:37:45.762151 kernel: Serial: AMBA PL011 UART driver
Sep 16 04:37:45.762159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:37:45.762166 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:37:45.762174 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 16 04:37:45.762181 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 16 04:37:45.762188 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:37:45.762195 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:37:45.762202 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 16 04:37:45.762209 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 16 04:37:45.762216 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:37:45.762224 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:37:45.762231 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:37:45.762238 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:37:45.762245 kernel: ACPI: Interpreter enabled
Sep 16 04:37:45.762252 kernel: ACPI: Using GIC for interrupt routing
Sep 16 04:37:45.762259 kernel: ACPI: MCFG table detected, 1 entries
Sep 16 04:37:45.762266 kernel: ACPI: CPU0 has been hot-added
Sep 16 04:37:45.762273 kernel: ACPI: CPU1 has been hot-added
Sep 16 04:37:45.762280 kernel: ACPI: CPU2 has been hot-added
Sep 16 04:37:45.762288 kernel: ACPI: CPU3 has been hot-added
Sep 16 04:37:45.762296 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 16 04:37:45.762303 kernel: printk: legacy console [ttyAMA0] enabled
Sep 16 04:37:45.762311 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 04:37:45.762460 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:37:45.762643 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 16 04:37:45.762713 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 16 04:37:45.762772 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 16 04:37:45.762835 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 16 04:37:45.762844 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 16 04:37:45.762851 kernel: PCI host bridge to bus 0000:00
Sep 16 04:37:45.762920 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 16 04:37:45.762974 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 16 04:37:45.763027 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 16 04:37:45.763079 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 04:37:45.763160 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:37:45.763231 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 16 04:37:45.763294 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 16 04:37:45.763354 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 16 04:37:45.763413 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 16 04:37:45.763473 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 16 04:37:45.763556 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 16 04:37:45.763624 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 16 04:37:45.763679 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 16 04:37:45.763731 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 16 04:37:45.763784 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 16 04:37:45.763793 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 16 04:37:45.763800 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 16 04:37:45.763808 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 16 04:37:45.763817 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 16 04:37:45.763824 kernel: iommu: Default domain type: Translated
Sep 16 04:37:45.763832 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 16 04:37:45.763839 kernel: efivars: Registered efivars operations
Sep 16 04:37:45.763846 kernel: vgaarb: loaded
Sep 16 04:37:45.763853 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 16 04:37:45.763860 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:37:45.763867 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:37:45.763874 kernel: pnp: PnP ACPI init
Sep 16 04:37:45.763942 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 16 04:37:45.763952 kernel: pnp: PnP ACPI: found 1 devices
Sep 16 04:37:45.763960 kernel: NET: Registered PF_INET protocol family
Sep 16 04:37:45.763967 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:37:45.763974 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 16 04:37:45.763981 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:37:45.763988 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:37:45.763996 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 16 04:37:45.764005 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 16 04:37:45.764012 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:37:45.764020 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:37:45.764027 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:37:45.764034 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:37:45.764041 kernel: kvm [1]: HYP mode not available
Sep 16 04:37:45.764048 kernel: Initialise system trusted keyrings
Sep 16 04:37:45.764055 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 16 04:37:45.764062 kernel: Key type asymmetric registered
Sep 16 04:37:45.764071 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:37:45.764078 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 16 04:37:45.764085 kernel: io scheduler mq-deadline registered
Sep 16 04:37:45.764092 kernel: io scheduler kyber registered
Sep 16 04:37:45.764100 kernel: io scheduler bfq registered
Sep 16 04:37:45.764107 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 16 04:37:45.764114 kernel: ACPI: button: Power Button [PWRB]
Sep 16 04:37:45.764122 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 16 04:37:45.764181 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 16 04:37:45.764192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:37:45.764199 kernel: thunder_xcv, ver 1.0
Sep 16 04:37:45.764206 kernel: thunder_bgx, ver 1.0
Sep 16 04:37:45.764213 kernel: nicpf, ver 1.0
Sep 16 04:37:45.764220 kernel: nicvf, ver 1.0
Sep 16 04:37:45.764294 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 16 04:37:45.764353 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:37:45 UTC (1757997465)
Sep 16 04:37:45.764362 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 16 04:37:45.764371 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 16 04:37:45.764379 kernel: watchdog: NMI not fully supported
Sep 16 04:37:45.764386 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:37:45.764393 kernel: watchdog: Hard watchdog permanently disabled
Sep 16 04:37:45.764400 kernel: Segment Routing with IPv6
Sep 16 04:37:45.764407 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:37:45.764414 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:37:45.764421 kernel: Key type dns_resolver registered
Sep 16 04:37:45.764429 kernel: registered taskstats version 1
Sep 16 04:37:45.764436 kernel: Loading compiled-in X.509 certificates
Sep 16 04:37:45.764445 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805'
Sep 16 04:37:45.764452 kernel: Demotion targets for Node 0: null
Sep 16 04:37:45.764459 kernel: Key type .fscrypt registered
Sep 16 04:37:45.764466 kernel: Key type fscrypt-provisioning registered
Sep 16 04:37:45.764473 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 04:37:45.764480 kernel: ima: Allocated hash algorithm: sha1
Sep 16 04:37:45.764487 kernel: ima: No architecture policies found
Sep 16 04:37:45.764494 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 16 04:37:45.764503 kernel: clk: Disabling unused clocks
Sep 16 04:37:45.764536 kernel: PM: genpd: Disabling unused power domains
Sep 16 04:37:45.764546 kernel: Warning: unable to open an initial console.
Sep 16 04:37:45.764553 kernel: Freeing unused kernel memory: 38976K
Sep 16 04:37:45.764560 kernel: Run /init as init process
Sep 16 04:37:45.764567 kernel: with arguments:
Sep 16 04:37:45.764575 kernel: /init
Sep 16 04:37:45.764581 kernel: with environment:
Sep 16 04:37:45.764588 kernel: HOME=/
Sep 16 04:37:45.764599 kernel: TERM=linux
Sep 16 04:37:45.764606 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 04:37:45.764614 systemd[1]: Successfully made /usr/ read-only.
Sep 16 04:37:45.764625 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:37:45.764633 systemd[1]: Detected virtualization kvm.
Sep 16 04:37:45.764640 systemd[1]: Detected architecture arm64.
Sep 16 04:37:45.764648 systemd[1]: Running in initrd.
Sep 16 04:37:45.764656 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:37:45.764665 systemd[1]: Hostname set to .
Sep 16 04:37:45.764672 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:37:45.764680 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:37:45.764688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:37:45.764695 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:37:45.764704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:37:45.764712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:37:45.764719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:37:45.764729 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:37:45.764738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:37:45.764747 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:37:45.764754 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:37:45.764762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:37:45.764770 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:37:45.764779 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:37:45.764787 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:37:45.764795 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:37:45.764803 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:37:45.764811 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:37:45.764819 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:37:45.764826 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:37:45.764834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:37:45.764842 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:37:45.764852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:37:45.764860 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:37:45.764868 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:37:45.764876 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:37:45.764884 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:37:45.764896 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:37:45.764904 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:37:45.764912 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:37:45.764921 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:37:45.764930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:37:45.764937 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:37:45.764946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:37:45.764956 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:37:45.764966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:37:45.765001 systemd-journald[245]: Collecting audit messages is disabled.
Sep 16 04:37:45.765021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:37:45.765031 systemd-journald[245]: Journal started
Sep 16 04:37:45.765053 systemd-journald[245]: Runtime Journal (/run/log/journal/708f835ee94b4234ada7f114919d4756) is 6M, max 48.5M, 42.4M free.
Sep 16 04:37:45.754377 systemd-modules-load[246]: Inserted module 'overlay'
Sep 16 04:37:45.769233 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:37:45.769260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:37:45.771174 kernel: Bridge firewalling registered
Sep 16 04:37:45.771218 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:37:45.770494 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 16 04:37:45.782687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:37:45.783761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:37:45.788024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:37:45.789387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:37:45.802180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:37:45.803606 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:37:45.805942 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:37:45.810009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:37:45.811907 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:37:45.814878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:37:45.816930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:37:45.826642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:37:45.838333 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313
Sep 16 04:37:45.856008 systemd-resolved[292]: Positive Trust Anchors:
Sep 16 04:37:45.856029 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:37:45.856060 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:37:45.860976 systemd-resolved[292]: Defaulting to hostname 'linux'.
Sep 16 04:37:45.861982 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:37:45.863591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:37:45.915552 kernel: SCSI subsystem initialized
Sep 16 04:37:45.920542 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:37:45.928570 kernel: iscsi: registered transport (tcp)
Sep 16 04:37:45.941549 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:37:45.941593 kernel: QLogic iSCSI HBA Driver
Sep 16 04:37:45.960593 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:37:45.986573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:37:45.988614 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:37:46.036081 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:37:46.038399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:37:46.101557 kernel: raid6: neonx8 gen() 15779 MB/s
Sep 16 04:37:46.118538 kernel: raid6: neonx4 gen() 15793 MB/s
Sep 16 04:37:46.135549 kernel: raid6: neonx2 gen() 13198 MB/s
Sep 16 04:37:46.152547 kernel: raid6: neonx1 gen() 10456 MB/s
Sep 16 04:37:46.169536 kernel: raid6: int64x8 gen() 6892 MB/s
Sep 16 04:37:46.186541 kernel: raid6: int64x4 gen() 7344 MB/s
Sep 16 04:37:46.203539 kernel: raid6: int64x2 gen() 6099 MB/s
Sep 16 04:37:46.220535 kernel: raid6: int64x1 gen() 5049 MB/s
Sep 16 04:37:46.220554 kernel: raid6: using algorithm neonx4 gen() 15793 MB/s
Sep 16 04:37:46.237553 kernel: raid6: .... xor() 12350 MB/s, rmw enabled
Sep 16 04:37:46.237580 kernel: raid6: using neon recovery algorithm
Sep 16 04:37:46.242711 kernel: xor: measuring software checksum speed
Sep 16 04:37:46.242732 kernel: 8regs : 21522 MB/sec
Sep 16 04:37:46.243745 kernel: 32regs : 21693 MB/sec
Sep 16 04:37:46.243770 kernel: arm64_neon : 28147 MB/sec
Sep 16 04:37:46.243788 kernel: xor: using function: arm64_neon (28147 MB/sec)
Sep 16 04:37:46.297573 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:37:46.304307 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:37:46.306856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:37:46.334621 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 16 04:37:46.339033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:37:46.340848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:37:46.365924 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 16 04:37:46.390800 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:37:46.393036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:37:46.448285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:37:46.450394 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:37:46.498073 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 16 04:37:46.498281 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 16 04:37:46.504754 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 04:37:46.504807 kernel: GPT:9289727 != 19775487
Sep 16 04:37:46.504819 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 04:37:46.505710 kernel: GPT:9289727 != 19775487
Sep 16 04:37:46.505736 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 04:37:46.506533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 16 04:37:46.513761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:37:46.513906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:37:46.527480 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:37:46.529401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:37:46.549564 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 16 04:37:46.550711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:37:46.552663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:37:46.560463 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 16 04:37:46.574132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 16 04:37:46.580117 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 16 04:37:46.581088 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 16 04:37:46.583047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:37:46.585494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:37:46.587331 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:37:46.589885 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:37:46.591612 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:37:46.615545 disk-uuid[592]: Primary Header is updated. Sep 16 04:37:46.615545 disk-uuid[592]: Secondary Entries is updated. Sep 16 04:37:46.615545 disk-uuid[592]: Secondary Header is updated. Sep 16 04:37:46.620458 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:37:46.622403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:37:46.626539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:37:47.627549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 16 04:37:47.628102 disk-uuid[596]: The operation has completed successfully. Sep 16 04:37:47.648771 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:37:47.648881 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:37:47.679223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:37:47.691771 sh[611]: Success Sep 16 04:37:47.704902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:37:47.704953 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:37:47.704964 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:37:47.712536 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:37:47.736479 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 16 04:37:47.739304 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:37:47.755021 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 16 04:37:47.760807 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (623) Sep 16 04:37:47.760848 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:37:47.760858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:37:47.764991 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:37:47.765018 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:37:47.766170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:37:47.767345 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:37:47.768350 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:37:47.769163 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:37:47.771955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 16 04:37:47.795561 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (653) Sep 16 04:37:47.797536 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:37:47.797570 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:37:47.799928 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:37:47.799981 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:37:47.804646 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:37:47.805761 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:37:47.808285 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:37:47.878582 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:37:47.881375 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:37:47.915921 ignition[697]: Ignition 2.22.0 Sep 16 04:37:47.915943 ignition[697]: Stage: fetch-offline Sep 16 04:37:47.915976 ignition[697]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:47.915984 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:47.916081 ignition[697]: parsed url from cmdline: "" Sep 16 04:37:47.920053 systemd-networkd[804]: lo: Link UP Sep 16 04:37:47.916085 ignition[697]: no config URL provided Sep 16 04:37:47.920057 systemd-networkd[804]: lo: Gained carrier Sep 16 04:37:47.916090 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:37:47.920784 systemd-networkd[804]: Enumeration completed Sep 16 04:37:47.916097 ignition[697]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:37:47.921192 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 16 04:37:47.916116 ignition[697]: op(1): [started] loading QEMU firmware config module Sep 16 04:37:47.921224 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:37:47.916122 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 16 04:37:47.921228 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:37:47.924632 ignition[697]: op(1): [finished] loading QEMU firmware config module Sep 16 04:37:47.922016 systemd-networkd[804]: eth0: Link UP Sep 16 04:37:47.922103 systemd-networkd[804]: eth0: Gained carrier Sep 16 04:37:47.922113 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:37:47.922485 systemd[1]: Reached target network.target - Network. Sep 16 04:37:47.941569 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:37:47.973787 ignition[697]: parsing config with SHA512: 26d6ca01796c0474af3d7c2017d46be34d4b8b22b21dfd624d46c7928dbfb3c436f08c0bd435aaf889f2535662c9a0cb18f8eb0cbc8a4fbcf935b0d4df52015d Sep 16 04:37:47.979692 unknown[697]: fetched base config from "system" Sep 16 04:37:47.979704 unknown[697]: fetched user config from "qemu" Sep 16 04:37:47.980096 ignition[697]: fetch-offline: fetch-offline passed Sep 16 04:37:47.980159 ignition[697]: Ignition finished successfully Sep 16 04:37:47.982445 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:37:47.983949 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 16 04:37:47.984792 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 16 04:37:48.023456 ignition[811]: Ignition 2.22.0 Sep 16 04:37:48.023479 ignition[811]: Stage: kargs Sep 16 04:37:48.023668 ignition[811]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:48.023678 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:48.025072 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.119 Sep 16 04:37:48.024649 ignition[811]: kargs: kargs passed Sep 16 04:37:48.025081 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Sep 16 04:37:48.024702 ignition[811]: Ignition finished successfully Sep 16 04:37:48.028039 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:37:48.029927 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 16 04:37:48.061131 ignition[819]: Ignition 2.22.0 Sep 16 04:37:48.061150 ignition[819]: Stage: disks Sep 16 04:37:48.061312 ignition[819]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:48.061321 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:48.064294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:37:48.062140 ignition[819]: disks: disks passed Sep 16 04:37:48.066163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:37:48.062188 ignition[819]: Ignition finished successfully Sep 16 04:37:48.067493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:37:48.068872 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:37:48.070308 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:37:48.071663 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:37:48.074394 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 16 04:37:48.105124 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 16 04:37:48.110552 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:37:48.113636 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:37:48.172545 kernel: EXT4-fs (vda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:37:48.172701 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:37:48.173855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:37:48.176808 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:37:48.179056 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:37:48.179975 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 16 04:37:48.180031 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:37:48.180058 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:37:48.189369 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:37:48.191634 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 16 04:37:48.196057 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Sep 16 04:37:48.196092 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:37:48.196102 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:37:48.198635 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:37:48.198663 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:37:48.200104 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 04:37:48.237059 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:37:48.241589 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:37:48.245727 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:37:48.249801 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:37:48.323530 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:37:48.325407 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:37:48.327017 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:37:48.349577 kernel: BTRFS info (device vda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:37:48.359801 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:37:48.383632 ignition[953]: INFO : Ignition 2.22.0 Sep 16 04:37:48.383632 ignition[953]: INFO : Stage: mount Sep 16 04:37:48.385048 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:48.385048 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:48.385048 ignition[953]: INFO : mount: mount passed Sep 16 04:37:48.385048 ignition[953]: INFO : Ignition finished successfully Sep 16 04:37:48.387065 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:37:48.389443 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:37:48.759832 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:37:48.761350 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 16 04:37:48.781391 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964) Sep 16 04:37:48.781439 kernel: BTRFS info (device vda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:37:48.781449 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:37:48.784599 kernel: BTRFS info (device vda6): turning on async discard Sep 16 04:37:48.784643 kernel: BTRFS info (device vda6): enabling free space tree Sep 16 04:37:48.786044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:37:48.819064 ignition[981]: INFO : Ignition 2.22.0 Sep 16 04:37:48.819064 ignition[981]: INFO : Stage: files Sep 16 04:37:48.820612 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:48.820612 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:48.820612 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:37:48.823353 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:37:48.823353 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:37:48.825588 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:37:48.825588 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:37:48.825588 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:37:48.825396 unknown[981]: wrote ssh authorized keys file for user: core Sep 16 04:37:48.829603 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 16 04:37:48.829603 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 16 
04:37:49.385756 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:37:49.829752 systemd-networkd[804]: eth0: Gained IPv6LL Sep 16 04:37:50.102729 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 16 04:37:50.102729 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:37:50.106168 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 16 04:37:50.115816 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 16 04:37:50.495500 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 16 04:37:51.004750 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 16 04:37:51.004750 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 16 04:37:51.008195 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 
16 04:37:51.012072 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 16 04:37:51.012072 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 16 04:37:51.025866 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:37:51.030172 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 16 04:37:51.030172 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 16 04:37:51.030172 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:37:51.030172 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:37:51.030172 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:37:51.030172 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:37:51.030172 ignition[981]: INFO : files: files passed Sep 16 04:37:51.030172 ignition[981]: INFO : Ignition finished successfully Sep 16 04:37:51.032024 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:37:51.036282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:37:51.040484 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:37:51.048939 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:37:51.049068 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 16 04:37:51.051662 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Sep 16 04:37:51.053264 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:37:51.053264 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:37:51.056161 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:37:51.056061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:37:51.057571 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:37:51.060661 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:37:51.095272 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:37:51.095420 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 04:37:51.097332 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:37:51.098667 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:37:51.100189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:37:51.101090 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:37:51.115590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:37:51.118025 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:37:51.144875 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:37:51.145930 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:37:51.147398 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 16 04:37:51.148940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:37:51.149068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:37:51.151256 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:37:51.153030 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:37:51.154863 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:37:51.156307 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:37:51.158003 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:37:51.159766 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:37:51.161472 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:37:51.163214 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:37:51.164893 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:37:51.166504 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:37:51.168115 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:37:51.169385 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:37:51.169530 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:37:51.171684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:37:51.173647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:37:51.175299 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:37:51.178618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:37:51.179653 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:37:51.179777 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 16 04:37:51.182047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:37:51.182159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:37:51.183820 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:37:51.185136 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:37:51.188580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:37:51.189632 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:37:51.191532 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:37:51.192864 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:37:51.192950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:37:51.194337 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:37:51.194410 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:37:51.195811 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:37:51.195973 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:37:51.197348 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:37:51.197453 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:37:51.199693 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:37:51.201982 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:37:51.202766 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:37:51.202878 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:37:51.204461 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:37:51.204576 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 16 04:37:51.209433 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:37:51.212709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:37:51.221958 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:37:51.228893 ignition[1036]: INFO : Ignition 2.22.0 Sep 16 04:37:51.228893 ignition[1036]: INFO : Stage: umount Sep 16 04:37:51.232756 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:37:51.232756 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 16 04:37:51.232756 ignition[1036]: INFO : umount: umount passed Sep 16 04:37:51.232756 ignition[1036]: INFO : Ignition finished successfully Sep 16 04:37:51.231801 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:37:51.233548 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:37:51.236795 systemd[1]: Stopped target network.target - Network. Sep 16 04:37:51.237902 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:37:51.237970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:37:51.239251 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:37:51.239287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:37:51.240729 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:37:51.240787 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:37:51.242089 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 16 04:37:51.242126 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:37:51.243623 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:37:51.245067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:37:51.253991 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 16 04:37:51.254128 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:37:51.257363 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:37:51.257707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:37:51.257748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:37:51.261241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:37:51.261457 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:37:51.261601 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:37:51.266004 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:37:51.266471 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:37:51.271719 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:37:51.271766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:37:51.274432 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:37:51.275480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:37:51.275567 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:37:51.278598 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:37:51.278645 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:37:51.281067 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:37:51.281111 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:37:51.283061 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:37:51.286923 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 16 04:37:51.287218 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:37:51.287311 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:37:51.290201 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:37:51.290264 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:37:51.305189 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:37:51.305341 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:37:51.307351 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:37:51.307465 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:37:51.309414 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:37:51.309483 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:37:51.310402 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:37:51.310432 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:37:51.312007 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 16 04:37:51.312055 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:37:51.314319 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:37:51.314367 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:37:51.316670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:37:51.316722 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:37:51.320053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:37:51.321551 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:37:51.321611 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 16 04:37:51.324542 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:37:51.324592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:37:51.327491 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 16 04:37:51.327553 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:37:51.330459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:37:51.330515 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:37:51.332643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:37:51.332688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:37:51.341186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:37:51.341292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:37:51.343900 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:37:51.346010 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:37:51.370747 systemd[1]: Switching root. Sep 16 04:37:51.407657 systemd-journald[245]: Journal stopped Sep 16 04:37:52.148225 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Sep 16 04:37:52.148276 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:37:52.148289 kernel: SELinux: policy capability open_perms=1 Sep 16 04:37:52.148298 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:37:52.148310 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:37:52.148320 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:37:52.148329 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:37:52.148337 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:37:52.148347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:37:52.148356 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:37:52.148365 kernel: audit: type=1403 audit(1757997471.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:37:52.148381 systemd[1]: Successfully loaded SELinux policy in 62.978ms. Sep 16 04:37:52.148397 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.731ms. Sep 16 04:37:52.148409 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:37:52.148420 systemd[1]: Detected virtualization kvm. Sep 16 04:37:52.148432 systemd[1]: Detected architecture arm64. Sep 16 04:37:52.148442 systemd[1]: Detected first boot. Sep 16 04:37:52.148452 systemd[1]: Initializing machine ID from VM UUID. Sep 16 04:37:52.148462 zram_generator::config[1082]: No configuration found. Sep 16 04:37:52.148474 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:37:52.148495 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:37:52.148508 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Sep 16 04:37:52.148532 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:37:52.148543 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:37:52.148553 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:37:52.148563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:37:52.148573 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:37:52.148583 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:37:52.148593 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:37:52.148603 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:37:52.148615 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:37:52.148625 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:37:52.148635 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:37:52.148645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:37:52.148655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:37:52.148665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:37:52.148676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:37:52.148687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:37:52.148697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:37:52.148708 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 16 04:37:52.148718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:37:52.148728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:37:52.148738 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:37:52.148748 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:37:52.148759 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 16 04:37:52.148769 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:37:52.148780 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:37:52.148790 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:37:52.148801 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:37:52.148811 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:37:52.148821 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:37:52.148831 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:37:52.148841 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:37:52.148852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:37:52.148862 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:37:52.148873 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:37:52.148884 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:37:52.148895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:37:52.148905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:37:52.148915 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 16 04:37:52.148925 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:37:52.148934 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:37:52.148944 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:37:52.148959 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:37:52.148970 systemd[1]: Reached target machines.target - Containers. Sep 16 04:37:52.148981 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:37:52.148991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:37:52.149001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:37:52.149011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:37:52.149021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:37:52.149030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:37:52.149040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:37:52.149050 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:37:52.149061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:37:52.149071 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:37:52.149085 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:37:52.149095 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:37:52.149105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Sep 16 04:37:52.149118 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 04:37:52.149128 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:37:52.149138 kernel: fuse: init (API version 7.41) Sep 16 04:37:52.149149 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:37:52.149164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:37:52.149180 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:37:52.149190 kernel: loop: module loaded Sep 16 04:37:52.149199 kernel: ACPI: bus type drm_connector registered Sep 16 04:37:52.149208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:37:52.149219 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 04:37:52.149229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:37:52.149241 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:37:52.149250 systemd[1]: Stopped verity-setup.service. Sep 16 04:37:52.149260 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:37:52.149270 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:37:52.149300 systemd-journald[1158]: Collecting audit messages is disabled. Sep 16 04:37:52.149327 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:37:52.149338 systemd-journald[1158]: Journal started Sep 16 04:37:52.149359 systemd-journald[1158]: Runtime Journal (/run/log/journal/708f835ee94b4234ada7f114919d4756) is 6M, max 48.5M, 42.4M free. Sep 16 04:37:51.931818 systemd[1]: Queued start job for default target multi-user.target. 
Sep 16 04:37:51.953723 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 16 04:37:51.954134 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:37:52.151870 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:37:52.152534 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:37:52.153503 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:37:52.154598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:37:52.157549 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:37:52.158756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:37:52.159965 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:37:52.160140 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:37:52.161337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:37:52.161504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:37:52.162615 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:37:52.162768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:37:52.163775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:37:52.163938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:37:52.165112 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 04:37:52.165270 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:37:52.166595 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:37:52.166747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:37:52.167808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 16 04:37:52.168919 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:37:52.170403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:37:52.171721 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:37:52.184026 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:37:52.186222 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:37:52.188114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:37:52.189073 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:37:52.189100 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:37:52.190796 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:37:52.201394 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:37:52.202400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:37:52.203738 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:37:52.205762 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:37:52.206847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:37:52.208186 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:37:52.209307 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 16 04:37:52.210676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:37:52.214149 systemd-journald[1158]: Time spent on flushing to /var/log/journal/708f835ee94b4234ada7f114919d4756 is 15.768ms for 888 entries. Sep 16 04:37:52.214149 systemd-journald[1158]: System Journal (/var/log/journal/708f835ee94b4234ada7f114919d4756) is 8M, max 195.6M, 187.6M free. Sep 16 04:37:52.238997 systemd-journald[1158]: Received client request to flush runtime journal. Sep 16 04:37:52.239782 kernel: loop0: detected capacity change from 0 to 203944 Sep 16 04:37:52.215173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:37:52.217151 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:37:52.219920 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:37:52.222745 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:37:52.223968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:37:52.233585 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:37:52.234696 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:37:52.239760 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:37:52.242395 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:37:52.249342 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 16 04:37:52.249389 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 16 04:37:52.256137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:37:52.257772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 16 04:37:52.259572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:37:52.262798 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:37:52.278746 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:37:52.282563 kernel: loop1: detected capacity change from 0 to 119368 Sep 16 04:37:52.302734 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:37:52.305137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:37:52.313543 kernel: loop2: detected capacity change from 0 to 100632 Sep 16 04:37:52.331652 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 16 04:37:52.332047 systemd-tmpfiles[1220]: ACLs are not supported, ignoring. Sep 16 04:37:52.335386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:37:52.339534 kernel: loop3: detected capacity change from 0 to 203944 Sep 16 04:37:52.344622 kernel: loop4: detected capacity change from 0 to 119368 Sep 16 04:37:52.350575 kernel: loop5: detected capacity change from 0 to 100632 Sep 16 04:37:52.355165 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 16 04:37:52.355577 (sd-merge)[1224]: Merged extensions into '/usr'. Sep 16 04:37:52.359508 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:37:52.359539 systemd[1]: Reloading... Sep 16 04:37:52.425928 zram_generator::config[1253]: No configuration found. Sep 16 04:37:52.491452 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:37:52.572330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:37:52.572732 systemd[1]: Reloading finished in 212 ms. Sep 16 04:37:52.599468 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Sep 16 04:37:52.600784 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:37:52.624728 systemd[1]: Starting ensure-sysext.service... Sep 16 04:37:52.626315 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:37:52.641772 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:37:52.641809 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:37:52.642028 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 16 04:37:52.642210 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:37:52.642267 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:37:52.642280 systemd[1]: Reloading... Sep 16 04:37:52.642837 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:37:52.643046 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 16 04:37:52.643097 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Sep 16 04:37:52.645780 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:37:52.645793 systemd-tmpfiles[1285]: Skipping /boot Sep 16 04:37:52.651458 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:37:52.651477 systemd-tmpfiles[1285]: Skipping /boot Sep 16 04:37:52.686639 zram_generator::config[1312]: No configuration found. Sep 16 04:37:52.821145 systemd[1]: Reloading finished in 178 ms. Sep 16 04:37:52.845240 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Sep 16 04:37:52.846636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:37:52.864654 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:37:52.867075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:37:52.869363 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:37:52.873736 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:37:52.876258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:37:52.878964 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:37:52.884513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:37:52.888548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:37:52.893763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:37:52.896192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:37:52.897360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:37:52.897492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:37:52.906771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:37:52.910976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:37:52.911239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 16 04:37:52.913715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:37:52.913926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:37:52.916051 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:37:52.917799 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:37:52.917975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:37:52.923812 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Sep 16 04:37:52.923905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:37:52.924075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:37:52.924154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:37:52.924224 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:37:52.924310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:37:52.927768 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:37:52.929435 augenrules[1382]: No rules Sep 16 04:37:52.931787 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:37:52.933469 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:37:52.933878 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:37:52.952838 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 16 04:37:52.954317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:37:52.961385 systemd[1]: Finished ensure-sysext.service. Sep 16 04:37:52.964564 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:37:52.971217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:37:52.972758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:37:52.975714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:37:52.979835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:37:52.983748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:37:52.998303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:37:52.999764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:37:52.999816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:37:53.001569 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:37:53.004648 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 16 04:37:53.006597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:37:53.007135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:37:53.007320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 16 04:37:53.011919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:37:53.012135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:37:53.014281 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:37:53.014682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:37:53.020449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:37:53.033366 augenrules[1411]: /sbin/augenrules: No change Sep 16 04:37:53.034243 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:37:53.034451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:37:53.035920 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:37:53.042493 augenrules[1446]: No rules Sep 16 04:37:53.043431 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:37:53.045858 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:37:53.046072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:37:53.059816 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 16 04:37:53.101384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 16 04:37:53.108051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:37:53.136316 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:37:53.137785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 16 04:37:53.138333 systemd-networkd[1422]: lo: Link UP Sep 16 04:37:53.138345 systemd-networkd[1422]: lo: Gained carrier Sep 16 04:37:53.139290 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:37:53.139314 systemd-networkd[1422]: Enumeration completed Sep 16 04:37:53.139776 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:37:53.139795 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:37:53.140376 systemd-networkd[1422]: eth0: Link UP Sep 16 04:37:53.140499 systemd-networkd[1422]: eth0: Gained carrier Sep 16 04:37:53.140514 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:37:53.140534 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:37:53.142968 systemd-resolved[1351]: Positive Trust Anchors: Sep 16 04:37:53.142991 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:37:53.143022 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:37:53.143557 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 16 04:37:53.146807 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:37:53.154396 systemd-resolved[1351]: Defaulting to hostname 'linux'. Sep 16 04:37:53.155618 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 16 04:37:53.155860 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:37:53.157004 systemd[1]: Reached target network.target - Network. Sep 16 04:37:53.157699 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:37:53.158411 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Sep 16 04:37:53.158858 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:37:53.159593 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 16 04:37:53.159646 systemd-timesyncd[1423]: Initial clock synchronization to Tue 2025-09-16 04:37:52.995418 UTC. Sep 16 04:37:53.159990 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:37:53.161429 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:37:53.162689 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:37:53.163588 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:37:53.164489 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:37:53.165380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:37:53.165409 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:37:53.166135 systemd[1]: Reached target timers.target - Timer Units. 
Sep 16 04:37:53.167973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:37:53.170194 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:37:53.173441 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:37:53.174708 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:37:53.175661 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:37:53.179642 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:37:53.182325 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:37:53.186446 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:37:53.191748 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:37:53.193472 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:37:53.197242 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:37:53.199019 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:37:53.199138 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:37:53.201754 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:37:53.204634 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:37:53.209824 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:37:53.212259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:37:53.214789 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 16 04:37:53.216582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:37:53.217736 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:37:53.219591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:37:53.221677 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:37:53.234794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:37:53.239822 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:37:53.241492 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:37:53.241981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:37:53.243280 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:37:53.243747 jq[1494]: false Sep 16 04:37:53.246432 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:37:53.253367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:37:53.254884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:37:53.257601 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:37:53.258935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:37:53.259119 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 16 04:37:53.263539 jq[1505]: true Sep 16 04:37:53.269278 extend-filesystems[1495]: Found /dev/vda6 Sep 16 04:37:53.275042 extend-filesystems[1495]: Found /dev/vda9 Sep 16 04:37:53.280912 extend-filesystems[1495]: Checking size of /dev/vda9 Sep 16 04:37:53.278903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:37:53.284951 update_engine[1504]: I20250916 04:37:53.277930 1504 main.cc:92] Flatcar Update Engine starting Sep 16 04:37:53.285253 jq[1518]: true Sep 16 04:37:53.290363 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:37:53.290611 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:37:53.294292 dbus-daemon[1492]: [system] SELinux support is enabled Sep 16 04:37:53.294606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:37:53.298837 update_engine[1504]: I20250916 04:37:53.297712 1504 update_check_scheduler.cc:74] Next update check in 10m50s Sep 16 04:37:53.298902 tar[1512]: linux-arm64/helm Sep 16 04:37:53.298499 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:37:53.298570 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:37:53.299496 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:37:53.299530 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:37:53.299894 (ntainerd)[1519]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:37:53.300790 systemd[1]: Started update-engine.service - Update Engine. 
Sep 16 04:37:53.303228 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:37:53.314760 extend-filesystems[1495]: Resized partition /dev/vda9 Sep 16 04:37:53.317746 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:37:53.322532 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 16 04:37:53.370563 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 16 04:37:53.367594 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:37:53.382141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:37:53.383908 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button) Sep 16 04:37:53.384228 systemd-logind[1503]: New seat seat0. Sep 16 04:37:53.386195 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 16 04:37:53.386195 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 04:37:53.386195 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 16 04:37:53.385649 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:37:53.401646 bash[1553]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:37:53.401723 extend-filesystems[1495]: Resized filesystem in /dev/vda9 Sep 16 04:37:53.386683 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:37:53.388826 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:37:53.393369 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:37:53.404510 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 16 04:37:53.477681 containerd[1519]: time="2025-09-16T04:37:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:37:53.479951 containerd[1519]: time="2025-09-16T04:37:53.479869440Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:37:53.495640 containerd[1519]: time="2025-09-16T04:37:53.495543320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.48µs" Sep 16 04:37:53.495640 containerd[1519]: time="2025-09-16T04:37:53.495634600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:37:53.495767 containerd[1519]: time="2025-09-16T04:37:53.495655520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:37:53.495925 containerd[1519]: time="2025-09-16T04:37:53.495898640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:37:53.495949 containerd[1519]: time="2025-09-16T04:37:53.495926200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:37:53.495967 containerd[1519]: time="2025-09-16T04:37:53.495953240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496089 containerd[1519]: time="2025-09-16T04:37:53.496068080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496113 containerd[1519]: time="2025-09-16T04:37:53.496087920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 
04:37:53.496456 containerd[1519]: time="2025-09-16T04:37:53.496428240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496495 containerd[1519]: time="2025-09-16T04:37:53.496455840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496495 containerd[1519]: time="2025-09-16T04:37:53.496469200Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496495 containerd[1519]: time="2025-09-16T04:37:53.496485400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:37:53.496678 containerd[1519]: time="2025-09-16T04:37:53.496653720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:37:53.497020 containerd[1519]: time="2025-09-16T04:37:53.496994160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:37:53.497062 containerd[1519]: time="2025-09-16T04:37:53.497044720Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:37:53.497088 containerd[1519]: time="2025-09-16T04:37:53.497059800Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:37:53.497171 containerd[1519]: time="2025-09-16T04:37:53.497152800Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:37:53.497449 
containerd[1519]: time="2025-09-16T04:37:53.497430640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:37:53.497623 containerd[1519]: time="2025-09-16T04:37:53.497515800Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:37:53.501994 containerd[1519]: time="2025-09-16T04:37:53.501953400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:37:53.502097 containerd[1519]: time="2025-09-16T04:37:53.502076800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:37:53.502120 containerd[1519]: time="2025-09-16T04:37:53.502102760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:37:53.502120 containerd[1519]: time="2025-09-16T04:37:53.502117040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:37:53.502194 containerd[1519]: time="2025-09-16T04:37:53.502178760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:37:53.502256 containerd[1519]: time="2025-09-16T04:37:53.502199640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:37:53.502282 containerd[1519]: time="2025-09-16T04:37:53.502261960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:37:53.502282 containerd[1519]: time="2025-09-16T04:37:53.502277720Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:37:53.502315 containerd[1519]: time="2025-09-16T04:37:53.502294720Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:37:53.502338 containerd[1519]: 
time="2025-09-16T04:37:53.502308360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:37:53.502338 containerd[1519]: time="2025-09-16T04:37:53.502330480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:37:53.502371 containerd[1519]: time="2025-09-16T04:37:53.502344840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:37:53.502654 containerd[1519]: time="2025-09-16T04:37:53.502578080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:37:53.502678 containerd[1519]: time="2025-09-16T04:37:53.502665200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:37:53.502696 containerd[1519]: time="2025-09-16T04:37:53.502684720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:37:53.502714 containerd[1519]: time="2025-09-16T04:37:53.502696720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:37:53.502783 containerd[1519]: time="2025-09-16T04:37:53.502763840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:37:53.502807 containerd[1519]: time="2025-09-16T04:37:53.502784400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:37:53.502807 containerd[1519]: time="2025-09-16T04:37:53.502798920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:37:53.502846 containerd[1519]: time="2025-09-16T04:37:53.502810040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 04:37:53.502846 containerd[1519]: time="2025-09-16T04:37:53.502832080Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:37:53.502846 containerd[1519]: time="2025-09-16T04:37:53.502844560Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:37:53.502898 containerd[1519]: time="2025-09-16T04:37:53.502856520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:37:53.503215 containerd[1519]: time="2025-09-16T04:37:53.503145520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:37:53.503243 containerd[1519]: time="2025-09-16T04:37:53.503231520Z" level=info msg="Start snapshots syncer" Sep 16 04:37:53.503493 containerd[1519]: time="2025-09-16T04:37:53.503460800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:37:53.505717 containerd[1519]: time="2025-09-16T04:37:53.505660840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:37:53.505826 containerd[1519]: time="2025-09-16T04:37:53.505793160Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:37:53.506017 containerd[1519]: time="2025-09-16T04:37:53.505993680Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:37:53.506364 containerd[1519]: time="2025-09-16T04:37:53.506339520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:37:53.506461 containerd[1519]: time="2025-09-16T04:37:53.506442280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:37:53.506507 containerd[1519]: time="2025-09-16T04:37:53.506465240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:37:53.506573 containerd[1519]: time="2025-09-16T04:37:53.506485240Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:37:53.506596 containerd[1519]: time="2025-09-16T04:37:53.506580280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:37:53.506621 containerd[1519]: time="2025-09-16T04:37:53.506596720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:37:53.506770 containerd[1519]: time="2025-09-16T04:37:53.506748320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:37:53.506934 containerd[1519]: time="2025-09-16T04:37:53.506913040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:37:53.506958 containerd[1519]: time="2025-09-16T04:37:53.506939760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:37:53.507017 containerd[1519]: time="2025-09-16T04:37:53.506953640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:37:53.507127 containerd[1519]: time="2025-09-16T04:37:53.507107720Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:37:53.507153 containerd[1519]: time="2025-09-16T04:37:53.507134000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:37:53.507153 containerd[1519]: time="2025-09-16T04:37:53.507145040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:37:53.507251 containerd[1519]: time="2025-09-16T04:37:53.507231480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:37:53.507278 containerd[1519]: time="2025-09-16T04:37:53.507250600Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:37:53.507332 containerd[1519]: time="2025-09-16T04:37:53.507314640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:37:53.507357 containerd[1519]: time="2025-09-16T04:37:53.507334560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:37:53.507526 containerd[1519]: time="2025-09-16T04:37:53.507499440Z" level=info msg="runtime interface created" Sep 16 04:37:53.507609 containerd[1519]: time="2025-09-16T04:37:53.507515120Z" level=info msg="created NRI interface" Sep 16 04:37:53.507632 containerd[1519]: time="2025-09-16T04:37:53.507613000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:37:53.507659 containerd[1519]: time="2025-09-16T04:37:53.507646200Z" level=info msg="Connect containerd service" Sep 16 04:37:53.507703 containerd[1519]: time="2025-09-16T04:37:53.507689200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:37:53.509210 
containerd[1519]: time="2025-09-16T04:37:53.509181440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:37:53.575288 containerd[1519]: time="2025-09-16T04:37:53.575220640Z" level=info msg="Start subscribing containerd event" Sep 16 04:37:53.575379 containerd[1519]: time="2025-09-16T04:37:53.575301120Z" level=info msg="Start recovering state" Sep 16 04:37:53.575398 containerd[1519]: time="2025-09-16T04:37:53.575388920Z" level=info msg="Start event monitor" Sep 16 04:37:53.575416 containerd[1519]: time="2025-09-16T04:37:53.575404360Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:37:53.575434 containerd[1519]: time="2025-09-16T04:37:53.575415560Z" level=info msg="Start streaming server" Sep 16 04:37:53.575434 containerd[1519]: time="2025-09-16T04:37:53.575425120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:37:53.575434 containerd[1519]: time="2025-09-16T04:37:53.575432280Z" level=info msg="runtime interface starting up..." Sep 16 04:37:53.575527 containerd[1519]: time="2025-09-16T04:37:53.575438000Z" level=info msg="starting plugins..." Sep 16 04:37:53.575527 containerd[1519]: time="2025-09-16T04:37:53.575462840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:37:53.575617 containerd[1519]: time="2025-09-16T04:37:53.575588280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:37:53.575658 containerd[1519]: time="2025-09-16T04:37:53.575642960Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:37:53.575716 containerd[1519]: time="2025-09-16T04:37:53.575703360Z" level=info msg="containerd successfully booted in 0.098447s" Sep 16 04:37:53.575814 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 16 04:37:53.621633 tar[1512]: linux-arm64/LICENSE Sep 16 04:37:53.621720 tar[1512]: linux-arm64/README.md Sep 16 04:37:53.638726 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:37:53.847408 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:37:53.867720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:37:53.871272 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:37:53.890422 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:37:53.890722 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:37:53.893183 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:37:53.913958 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:37:53.916658 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:37:53.918741 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:37:53.919979 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:37:54.693653 systemd-networkd[1422]: eth0: Gained IPv6LL Sep 16 04:37:54.697560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:37:54.698881 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:37:54.700923 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 16 04:37:54.702954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:37:54.704925 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:37:54.732898 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:37:54.737097 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 16 04:37:54.737311 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 16 04:37:54.739410 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:37:55.250685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:37:55.251992 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 16 04:37:55.254737 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:37:55.257208 systemd[1]: Startup finished in 2.026s (kernel) + 5.962s (initrd) + 3.741s (userspace) = 11.730s. Sep 16 04:37:55.605351 kubelet[1631]: E0916 04:37:55.605290 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:37:55.607710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:37:55.607846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:37:55.608154 systemd[1]: kubelet.service: Consumed 777ms CPU time, 255.8M memory peak. Sep 16 04:37:58.615051 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:37:58.616101 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:56960.service - OpenSSH per-connection server daemon (10.0.0.1:56960). Sep 16 04:37:58.743376 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 56960 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:58.745215 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:58.751058 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:37:58.751983 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 16 04:37:58.757253 systemd-logind[1503]: New session 1 of user core. Sep 16 04:37:58.772550 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:37:58.775156 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 16 04:37:58.788606 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:37:58.791113 systemd-logind[1503]: New session c1 of user core. Sep 16 04:37:58.906181 systemd[1649]: Queued start job for default target default.target. Sep 16 04:37:58.912420 systemd[1649]: Created slice app.slice - User Application Slice. Sep 16 04:37:58.912450 systemd[1649]: Reached target paths.target - Paths. Sep 16 04:37:58.912486 systemd[1649]: Reached target timers.target - Timers. Sep 16 04:37:58.913649 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 16 04:37:58.923344 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 16 04:37:58.923414 systemd[1649]: Reached target sockets.target - Sockets. Sep 16 04:37:58.923456 systemd[1649]: Reached target basic.target - Basic System. Sep 16 04:37:58.923483 systemd[1649]: Reached target default.target - Main User Target. Sep 16 04:37:58.923555 systemd[1649]: Startup finished in 126ms. Sep 16 04:37:58.923643 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 16 04:37:58.924914 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 16 04:37:58.984683 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:56964.service - OpenSSH per-connection server daemon (10.0.0.1:56964). Sep 16 04:37:59.046596 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 56964 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.047738 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.051931 systemd-logind[1503]: New session 2 of user core. 
Sep 16 04:37:59.062686 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 16 04:37:59.114568 sshd[1663]: Connection closed by 10.0.0.1 port 56964 Sep 16 04:37:59.115039 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Sep 16 04:37:59.130589 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:56964.service: Deactivated successfully. Sep 16 04:37:59.133836 systemd[1]: session-2.scope: Deactivated successfully. Sep 16 04:37:59.134456 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Sep 16 04:37:59.136733 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:56974.service - OpenSSH per-connection server daemon (10.0.0.1:56974). Sep 16 04:37:59.137665 systemd-logind[1503]: Removed session 2. Sep 16 04:37:59.196458 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 56974 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.197779 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.202439 systemd-logind[1503]: New session 3 of user core. Sep 16 04:37:59.211714 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 16 04:37:59.258564 sshd[1672]: Connection closed by 10.0.0.1 port 56974 Sep 16 04:37:59.259138 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Sep 16 04:37:59.270611 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:56974.service: Deactivated successfully. Sep 16 04:37:59.273871 systemd[1]: session-3.scope: Deactivated successfully. Sep 16 04:37:59.274496 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. Sep 16 04:37:59.276827 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:56986.service - OpenSSH per-connection server daemon (10.0.0.1:56986). Sep 16 04:37:59.277744 systemd-logind[1503]: Removed session 3. 
Sep 16 04:37:59.337911 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 56986 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.339243 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.344411 systemd-logind[1503]: New session 4 of user core. Sep 16 04:37:59.362719 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 16 04:37:59.413833 sshd[1681]: Connection closed by 10.0.0.1 port 56986 Sep 16 04:37:59.414278 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Sep 16 04:37:59.428373 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:56986.service: Deactivated successfully. Sep 16 04:37:59.430778 systemd[1]: session-4.scope: Deactivated successfully. Sep 16 04:37:59.432978 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. Sep 16 04:37:59.436745 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:56988.service - OpenSSH per-connection server daemon (10.0.0.1:56988). Sep 16 04:37:59.437884 systemd-logind[1503]: Removed session 4. Sep 16 04:37:59.491160 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 56988 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.492431 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.496351 systemd-logind[1503]: New session 5 of user core. Sep 16 04:37:59.505695 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 16 04:37:59.561165 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 04:37:59.561428 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:37:59.574421 sudo[1691]: pam_unix(sudo:session): session closed for user root Sep 16 04:37:59.575959 sshd[1690]: Connection closed by 10.0.0.1 port 56988 Sep 16 04:37:59.576467 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Sep 16 04:37:59.604767 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:56988.service: Deactivated successfully. Sep 16 04:37:59.606193 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 04:37:59.606928 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. Sep 16 04:37:59.610989 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:57004.service - OpenSSH per-connection server daemon (10.0.0.1:57004). Sep 16 04:37:59.611425 systemd-logind[1503]: Removed session 5. Sep 16 04:37:59.671878 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 57004 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.673153 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.677775 systemd-logind[1503]: New session 6 of user core. Sep 16 04:37:59.687668 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 16 04:37:59.738908 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 04:37:59.739503 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:37:59.744957 sudo[1702]: pam_unix(sudo:session): session closed for user root Sep 16 04:37:59.750297 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 04:37:59.750592 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:37:59.759026 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:37:59.804727 augenrules[1724]: No rules Sep 16 04:37:59.806041 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:37:59.806261 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:37:59.807604 sudo[1701]: pam_unix(sudo:session): session closed for user root Sep 16 04:37:59.810267 sshd[1700]: Connection closed by 10.0.0.1 port 57004 Sep 16 04:37:59.810590 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Sep 16 04:37:59.822446 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:57004.service: Deactivated successfully. Sep 16 04:37:59.823878 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 04:37:59.825593 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Sep 16 04:37:59.826439 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:57010.service - OpenSSH per-connection server daemon (10.0.0.1:57010). Sep 16 04:37:59.827467 systemd-logind[1503]: Removed session 6. Sep 16 04:37:59.883993 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:37:59.885341 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:37:59.889237 systemd-logind[1503]: New session 7 of user core. 
Sep 16 04:37:59.898734 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 04:37:59.950677 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 04:37:59.950944 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 04:38:00.219454 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 04:38:00.239861 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 04:38:00.435210 dockerd[1757]: time="2025-09-16T04:38:00.435149700Z" level=info msg="Starting up" Sep 16 04:38:00.435977 dockerd[1757]: time="2025-09-16T04:38:00.435956603Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 04:38:00.445439 dockerd[1757]: time="2025-09-16T04:38:00.445387664Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 04:38:00.545256 dockerd[1757]: time="2025-09-16T04:38:00.545207286Z" level=info msg="Loading containers: start." Sep 16 04:38:00.554548 kernel: Initializing XFRM netlink socket Sep 16 04:38:00.731326 systemd-networkd[1422]: docker0: Link UP Sep 16 04:38:00.734184 dockerd[1757]: time="2025-09-16T04:38:00.734147055Z" level=info msg="Loading containers: done." 
Sep 16 04:38:00.747942 dockerd[1757]: time="2025-09-16T04:38:00.747891363Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 04:38:00.748065 dockerd[1757]: time="2025-09-16T04:38:00.747968764Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 04:38:00.748065 dockerd[1757]: time="2025-09-16T04:38:00.748050564Z" level=info msg="Initializing buildkit" Sep 16 04:38:00.769912 dockerd[1757]: time="2025-09-16T04:38:00.769870564Z" level=info msg="Completed buildkit initialization" Sep 16 04:38:00.775339 dockerd[1757]: time="2025-09-16T04:38:00.774945337Z" level=info msg="Daemon has completed initialization" Sep 16 04:38:00.775339 dockerd[1757]: time="2025-09-16T04:38:00.775102992Z" level=info msg="API listen on /run/docker.sock" Sep 16 04:38:00.775191 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 04:38:01.365659 containerd[1519]: time="2025-09-16T04:38:01.365622743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 16 04:38:01.906474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998115948.mount: Deactivated successfully. 
Sep 16 04:38:03.243665 containerd[1519]: time="2025-09-16T04:38:03.243610373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:03.244749 containerd[1519]: time="2025-09-16T04:38:03.244634538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327" Sep 16 04:38:03.245555 containerd[1519]: time="2025-09-16T04:38:03.245530221Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:03.248166 containerd[1519]: time="2025-09-16T04:38:03.248117328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:03.249636 containerd[1519]: time="2025-09-16T04:38:03.249604779Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.883937807s" Sep 16 04:38:03.249692 containerd[1519]: time="2025-09-16T04:38:03.249646718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 16 04:38:03.250829 containerd[1519]: time="2025-09-16T04:38:03.250794278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 16 04:38:04.779198 containerd[1519]: time="2025-09-16T04:38:04.778403076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:04.780342 containerd[1519]: time="2025-09-16T04:38:04.780318206Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769" Sep 16 04:38:04.781211 containerd[1519]: time="2025-09-16T04:38:04.781186575Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:04.786619 containerd[1519]: time="2025-09-16T04:38:04.786567742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:04.787505 containerd[1519]: time="2025-09-16T04:38:04.787482659Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.536658638s" Sep 16 04:38:04.787703 containerd[1519]: time="2025-09-16T04:38:04.787591667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 16 04:38:04.788149 containerd[1519]: time="2025-09-16T04:38:04.788092749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 16 04:38:05.774492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 04:38:05.777696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:38:05.954051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:38:05.957766 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:38:06.140085 containerd[1519]: time="2025-09-16T04:38:06.139974614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:06.140887 containerd[1519]: time="2025-09-16T04:38:06.140708051Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508" Sep 16 04:38:06.141497 containerd[1519]: time="2025-09-16T04:38:06.141476822Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:06.144586 containerd[1519]: time="2025-09-16T04:38:06.144542943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:06.145961 containerd[1519]: time="2025-09-16T04:38:06.145924691Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.357806589s" Sep 16 04:38:06.146025 containerd[1519]: time="2025-09-16T04:38:06.145963211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 16 04:38:06.146530 containerd[1519]: time="2025-09-16T04:38:06.146394422Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 16 04:38:06.165624 
kubelet[2054]: E0916 04:38:06.165578 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:38:06.168651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:38:06.168778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:38:06.170808 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak. Sep 16 04:38:07.116083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691318986.mount: Deactivated successfully. Sep 16 04:38:07.341245 containerd[1519]: time="2025-09-16T04:38:07.341202413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:07.342602 containerd[1519]: time="2025-09-16T04:38:07.342565268Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909" Sep 16 04:38:07.343133 containerd[1519]: time="2025-09-16T04:38:07.343069041Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:07.345104 containerd[1519]: time="2025-09-16T04:38:07.345048539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:07.345715 containerd[1519]: time="2025-09-16T04:38:07.345530790Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.199092418s" Sep 16 04:38:07.345715 containerd[1519]: time="2025-09-16T04:38:07.345567576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 16 04:38:07.346103 containerd[1519]: time="2025-09-16T04:38:07.346057160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:38:07.827200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180226196.mount: Deactivated successfully. Sep 16 04:38:08.606062 containerd[1519]: time="2025-09-16T04:38:08.606008649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:08.606743 containerd[1519]: time="2025-09-16T04:38:08.606717201Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 16 04:38:08.607941 containerd[1519]: time="2025-09-16T04:38:08.607874927Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:08.611534 containerd[1519]: time="2025-09-16T04:38:08.611412503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:08.613689 containerd[1519]: time="2025-09-16T04:38:08.613661327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.267567058s" Sep 16 04:38:08.613689 containerd[1519]: time="2025-09-16T04:38:08.613693386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:38:08.614335 containerd[1519]: time="2025-09-16T04:38:08.614090845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:38:09.021078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476822340.mount: Deactivated successfully. Sep 16 04:38:09.024454 containerd[1519]: time="2025-09-16T04:38:09.024404622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:38:09.024841 containerd[1519]: time="2025-09-16T04:38:09.024817237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 16 04:38:09.025640 containerd[1519]: time="2025-09-16T04:38:09.025593363Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:38:09.028170 containerd[1519]: time="2025-09-16T04:38:09.027965261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:38:09.028614 containerd[1519]: time="2025-09-16T04:38:09.028592839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 414.406175ms" Sep 16 04:38:09.028655 containerd[1519]: time="2025-09-16T04:38:09.028620403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:38:09.029155 containerd[1519]: time="2025-09-16T04:38:09.029104419Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:38:09.514640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169661100.mount: Deactivated successfully. Sep 16 04:38:11.548550 containerd[1519]: time="2025-09-16T04:38:11.548307714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:11.549829 containerd[1519]: time="2025-09-16T04:38:11.549673095Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 16 04:38:11.550853 containerd[1519]: time="2025-09-16T04:38:11.550823492Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:11.553983 containerd[1519]: time="2025-09-16T04:38:11.553938917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:11.555509 containerd[1519]: time="2025-09-16T04:38:11.555372673Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"66535646\" in 2.52623845s" Sep 16 04:38:11.555509 containerd[1519]: time="2025-09-16T04:38:11.555407838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 16 04:38:15.717101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:38:15.717246 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak. Sep 16 04:38:15.719141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:38:15.741696 systemd[1]: Reload requested from client PID 2210 ('systemctl') (unit session-7.scope)... Sep 16 04:38:15.741708 systemd[1]: Reloading... Sep 16 04:38:15.807543 zram_generator::config[2252]: No configuration found. Sep 16 04:38:15.968282 systemd[1]: Reloading finished in 226 ms. Sep 16 04:38:16.029149 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:38:16.029236 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:38:16.029542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:38:16.029592 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.1M memory peak. Sep 16 04:38:16.031191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:38:16.140786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:38:16.145071 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:38:16.181039 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:38:16.181039 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 16 04:38:16.181039 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:38:16.181371 kubelet[2299]: I0916 04:38:16.181073 2299 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:38:17.054572 kubelet[2299]: I0916 04:38:17.053842 2299 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:38:17.054572 kubelet[2299]: I0916 04:38:17.053908 2299 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:38:17.054572 kubelet[2299]: I0916 04:38:17.054144 2299 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:38:17.075547 kubelet[2299]: E0916 04:38:17.075473 2299 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:38:17.076724 kubelet[2299]: I0916 04:38:17.076696 2299 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:38:17.085122 kubelet[2299]: I0916 04:38:17.085087 2299 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:38:17.088932 kubelet[2299]: I0916 04:38:17.088790 2299 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:38:17.089693 kubelet[2299]: I0916 04:38:17.089669 2299 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:38:17.089942 kubelet[2299]: I0916 04:38:17.089908 2299 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:38:17.090198 kubelet[2299]: I0916 04:38:17.090006 2299 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2}
Sep 16 04:38:17.090387 kubelet[2299]: I0916 04:38:17.090374 2299 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 04:38:17.090436 kubelet[2299]: I0916 04:38:17.090429 2299 container_manager_linux.go:300] "Creating device plugin manager"
Sep 16 04:38:17.090720 kubelet[2299]: I0916 04:38:17.090705 2299 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:38:17.092655 kubelet[2299]: I0916 04:38:17.092632 2299 kubelet.go:408] "Attempting to sync node with API server"
Sep 16 04:38:17.092754 kubelet[2299]: I0916 04:38:17.092743 2299 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 04:38:17.092815 kubelet[2299]: I0916 04:38:17.092807 2299 kubelet.go:314] "Adding apiserver pod source"
Sep 16 04:38:17.092983 kubelet[2299]: I0916 04:38:17.092973 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 04:38:17.094847 kubelet[2299]: W0916 04:38:17.094782 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 16 04:38:17.094895 kubelet[2299]: E0916 04:38:17.094865 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:38:17.095801 kubelet[2299]: W0916 04:38:17.095706 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 16 04:38:17.095801 kubelet[2299]: E0916 04:38:17.095749 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.119:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:38:17.096679 kubelet[2299]: I0916 04:38:17.096664 2299 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 04:38:17.097655 kubelet[2299]: I0916 04:38:17.097630 2299 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 16 04:38:17.097853 kubelet[2299]: W0916 04:38:17.097838 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 16 04:38:17.098841 kubelet[2299]: I0916 04:38:17.098813 2299 server.go:1274] "Started kubelet"
Sep 16 04:38:17.099762 kubelet[2299]: I0916 04:38:17.099672 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 04:38:17.099835 kubelet[2299]: I0916 04:38:17.099706 2299 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 04:38:17.099972 kubelet[2299]: I0916 04:38:17.099951 2299 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 04:38:17.100857 kubelet[2299]: I0916 04:38:17.100260 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 04:38:17.100857 kubelet[2299]: I0916 04:38:17.100339 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 04:38:17.100934 kubelet[2299]: I0916 04:38:17.100918 2299 server.go:449] "Adding debug handlers to kubelet server"
Sep 16 04:38:17.105683 kubelet[2299]: E0916 04:38:17.105658 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 16 04:38:17.105791 kubelet[2299]: I0916 04:38:17.105756 2299 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 16 04:38:17.106048 kubelet[2299]: I0916 04:38:17.106010 2299 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 16 04:38:17.106093 kubelet[2299]: I0916 04:38:17.106072 2299 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 04:38:17.106794 kubelet[2299]: W0916 04:38:17.106755 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 16 04:38:17.106838 kubelet[2299]: E0916 04:38:17.106804 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:38:17.107560 kubelet[2299]: I0916 04:38:17.107504 2299 factory.go:221] Registration of the systemd container factory successfully
Sep 16 04:38:17.107662 kubelet[2299]: I0916 04:38:17.107635 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 04:38:17.107833 kubelet[2299]: E0916 04:38:17.107791 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms"
Sep 16 04:38:17.108308 kubelet[2299]: E0916 04:38:17.108275 2299 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 16 04:38:17.109537 kubelet[2299]: I0916 04:38:17.109104 2299 factory.go:221] Registration of the containerd container factory successfully
Sep 16 04:38:17.109870 kubelet[2299]: E0916 04:38:17.108907 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865a968ca9972e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:38:17.098793703 +0000 UTC m=+0.950752113,LastTimestamp:2025-09-16 04:38:17.098793703 +0000 UTC m=+0.950752113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 16 04:38:17.121429 kubelet[2299]: I0916 04:38:17.121381 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 16 04:38:17.123038 kubelet[2299]: I0916 04:38:17.122902 2299 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 16 04:38:17.123038 kubelet[2299]: I0916 04:38:17.122915 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 16 04:38:17.123038 kubelet[2299]: I0916 04:38:17.122947 2299 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:38:17.123133 kubelet[2299]: I0916 04:38:17.123046 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 16 04:38:17.123133 kubelet[2299]: I0916 04:38:17.123067 2299 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 16 04:38:17.123133 kubelet[2299]: I0916 04:38:17.123083 2299 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 16 04:38:17.123133 kubelet[2299]: E0916 04:38:17.123121 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 04:38:17.124779 kubelet[2299]: I0916 04:38:17.124756 2299 policy_none.go:49] "None policy: Start"
Sep 16 04:38:17.125235 kubelet[2299]: W0916 04:38:17.125193 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 16 04:38:17.125269 kubelet[2299]: E0916 04:38:17.125233 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.119:6443: connect: connection refused" logger="UnhandledError"
Sep 16 04:38:17.125861 kubelet[2299]: I0916 04:38:17.125838 2299 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 16 04:38:17.125861 kubelet[2299]: I0916 04:38:17.125865 2299 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 04:38:17.131072 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 16 04:38:17.141548 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 16 04:38:17.144622 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 16 04:38:17.163489 kubelet[2299]: I0916 04:38:17.163462 2299 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 16 04:38:17.163802 kubelet[2299]: I0916 04:38:17.163782 2299 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 04:38:17.163897 kubelet[2299]: I0916 04:38:17.163866 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 04:38:17.164390 kubelet[2299]: I0916 04:38:17.164358 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 04:38:17.165197 kubelet[2299]: E0916 04:38:17.165175 2299 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 16 04:38:17.232662 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice.
Sep 16 04:38:17.264838 kubelet[2299]: I0916 04:38:17.264802 2299 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 16 04:38:17.265695 kubelet[2299]: E0916 04:38:17.265652 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost"
Sep 16 04:38:17.266550 systemd[1]: Created slice kubepods-burstable-pode3f07987b26304f10e16079125ccd355.slice - libcontainer container kubepods-burstable-pode3f07987b26304f10e16079125ccd355.slice.
Sep 16 04:38:17.270778 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice.
Sep 16 04:38:17.307596 kubelet[2299]: I0916 04:38:17.307453 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 16 04:38:17.307596 kubelet[2299]: I0916 04:38:17.307489 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 16 04:38:17.307596 kubelet[2299]: I0916 04:38:17.307506 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 16 04:38:17.307596 kubelet[2299]: I0916 04:38:17.307534 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 16 04:38:17.307596 kubelet[2299]: I0916 04:38:17.307554 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 16 04:38:17.307774 kubelet[2299]: I0916 04:38:17.307568 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost"
Sep 16 04:38:17.307774 kubelet[2299]: I0916 04:38:17.307582 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost"
Sep 16 04:38:17.307774 kubelet[2299]: I0916 04:38:17.307764 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost"
Sep 16 04:38:17.307828 kubelet[2299]: I0916 04:38:17.307781 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 16 04:38:17.309143 kubelet[2299]: E0916 04:38:17.309100 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms"
Sep 16 04:38:17.467074 kubelet[2299]: I0916 04:38:17.467040 2299 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 16 04:38:17.467397 kubelet[2299]: E0916 04:38:17.467360 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost"
Sep 16 04:38:17.563898 kubelet[2299]: E0916 04:38:17.563775 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.564571 containerd[1519]: time="2025-09-16T04:38:17.564459253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 16 04:38:17.569104 kubelet[2299]: E0916 04:38:17.569084 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.569397 containerd[1519]: time="2025-09-16T04:38:17.569364584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3f07987b26304f10e16079125ccd355,Namespace:kube-system,Attempt:0,}"
Sep 16 04:38:17.573148 kubelet[2299]: E0916 04:38:17.573113 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.573598 containerd[1519]: time="2025-09-16T04:38:17.573390712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}"
Sep 16 04:38:17.585709 containerd[1519]: time="2025-09-16T04:38:17.585672421Z" level=info msg="connecting to shim 6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b" address="unix:///run/containerd/s/c86b2a5a9c610c9abe745ebe4723f9713cfb84190c5fc98259fdc6729d897176" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:38:17.596562 containerd[1519]: time="2025-09-16T04:38:17.596254429Z" level=info msg="connecting to shim a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b" address="unix:///run/containerd/s/47bc50d4826e9e8b939ba20ac42dcc890b53866c8071bd5eb3e4c6dd8af139b7" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:38:17.607097 containerd[1519]: time="2025-09-16T04:38:17.607055148Z" level=info msg="connecting to shim befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73" address="unix:///run/containerd/s/aabdc3a4b719fb0b83927f803fbdc14d4dae479f96e655f050f53ec6b304bb36" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:38:17.617668 systemd[1]: Started cri-containerd-6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b.scope - libcontainer container 6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b.
Sep 16 04:38:17.620095 systemd[1]: Started cri-containerd-a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b.scope - libcontainer container a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b.
Sep 16 04:38:17.636694 systemd[1]: Started cri-containerd-befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73.scope - libcontainer container befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73.
Sep 16 04:38:17.675119 containerd[1519]: time="2025-09-16T04:38:17.674987526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3f07987b26304f10e16079125ccd355,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b\""
Sep 16 04:38:17.677963 kubelet[2299]: E0916 04:38:17.677886 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.680579 containerd[1519]: time="2025-09-16T04:38:17.680534046Z" level=info msg="CreateContainer within sandbox \"a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 16 04:38:17.681976 containerd[1519]: time="2025-09-16T04:38:17.681935033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b\""
Sep 16 04:38:17.682652 kubelet[2299]: E0916 04:38:17.682628 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.685208 containerd[1519]: time="2025-09-16T04:38:17.685177506Z" level=info msg="CreateContainer within sandbox \"6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 16 04:38:17.690104 containerd[1519]: time="2025-09-16T04:38:17.690055863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73\""
Sep 16 04:38:17.690801 kubelet[2299]: E0916 04:38:17.690780 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:17.695339 containerd[1519]: time="2025-09-16T04:38:17.694761703Z" level=info msg="CreateContainer within sandbox \"befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 16 04:38:17.695339 containerd[1519]: time="2025-09-16T04:38:17.695103178Z" level=info msg="Container 3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:38:17.695771 containerd[1519]: time="2025-09-16T04:38:17.695748244Z" level=info msg="Container d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:38:17.704774 containerd[1519]: time="2025-09-16T04:38:17.704735410Z" level=info msg="CreateContainer within sandbox \"a04d99ca5931d14ce5ba23ad5ea1a0fea8abbb657feb75b4b9fe4ae4e75ae45b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d\""
Sep 16 04:38:17.705355 containerd[1519]: time="2025-09-16T04:38:17.705328965Z" level=info msg="StartContainer for \"d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d\""
Sep 16 04:38:17.706111 containerd[1519]: time="2025-09-16T04:38:17.706077932Z" level=info msg="CreateContainer within sandbox \"6b35b3bab504bf4bdb3361c2e0118364d240dd0fed4e1ca09965c422c301720b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65\""
Sep 16 04:38:17.706634 containerd[1519]: time="2025-09-16T04:38:17.706604910Z" level=info msg="StartContainer for \"3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65\""
Sep 16 04:38:17.706934 containerd[1519]: time="2025-09-16T04:38:17.706909061Z" level=info msg="connecting to shim d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d" address="unix:///run/containerd/s/47bc50d4826e9e8b939ba20ac42dcc890b53866c8071bd5eb3e4c6dd8af139b7" protocol=ttrpc version=3
Sep 16 04:38:17.707119 containerd[1519]: time="2025-09-16T04:38:17.707092646Z" level=info msg="Container c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:38:17.707629 containerd[1519]: time="2025-09-16T04:38:17.707604039Z" level=info msg="connecting to shim 3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65" address="unix:///run/containerd/s/c86b2a5a9c610c9abe745ebe4723f9713cfb84190c5fc98259fdc6729d897176" protocol=ttrpc version=3
Sep 16 04:38:17.710853 kubelet[2299]: E0916 04:38:17.710026 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms"
Sep 16 04:38:17.715177 containerd[1519]: time="2025-09-16T04:38:17.715123761Z" level=info msg="CreateContainer within sandbox \"befa01e4d7625cfc80af5581cb3b902239cdcc55437325deed9f673959795f73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e\""
Sep 16 04:38:17.715577 containerd[1519]: time="2025-09-16T04:38:17.715555870Z" level=info msg="StartContainer for \"c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e\""
Sep 16 04:38:17.716763 containerd[1519]: time="2025-09-16T04:38:17.716734748Z" level=info msg="connecting to shim c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e" address="unix:///run/containerd/s/aabdc3a4b719fb0b83927f803fbdc14d4dae479f96e655f050f53ec6b304bb36" protocol=ttrpc version=3
Sep 16 04:38:17.728694 systemd[1]: Started cri-containerd-3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65.scope - libcontainer container 3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65.
Sep 16 04:38:17.730077 systemd[1]: Started cri-containerd-d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d.scope - libcontainer container d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d.
Sep 16 04:38:17.733476 systemd[1]: Started cri-containerd-c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e.scope - libcontainer container c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e.
Sep 16 04:38:17.776371 containerd[1519]: time="2025-09-16T04:38:17.776316714Z" level=info msg="StartContainer for \"3507c6054037ef5be2fb44e9e02fe70f56a9fe2b52222b57a8b70a565b1c8a65\" returns successfully"
Sep 16 04:38:17.779539 containerd[1519]: time="2025-09-16T04:38:17.779482621Z" level=info msg="StartContainer for \"d366d972975092432ae98dc9faf996aa88b1e16535f6b29b9edfb0607078d63d\" returns successfully"
Sep 16 04:38:17.786091 containerd[1519]: time="2025-09-16T04:38:17.786060439Z" level=info msg="StartContainer for \"c6a3702239e1cf1a52a46b3f96187d9eda14803a0799993afe41f6b0946f266e\" returns successfully"
Sep 16 04:38:17.870789 kubelet[2299]: I0916 04:38:17.870623 2299 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 16 04:38:18.133269 kubelet[2299]: E0916 04:38:18.132923 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:18.135943 kubelet[2299]: E0916 04:38:18.135915 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:18.137762 kubelet[2299]: E0916 04:38:18.137714 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:19.140653 kubelet[2299]: E0916 04:38:19.140607 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:19.621544 kubelet[2299]: E0916 04:38:19.620877 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 16 04:38:19.711085 kubelet[2299]: I0916 04:38:19.711031 2299 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 16 04:38:19.711085 kubelet[2299]: E0916 04:38:19.711071 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 16 04:38:19.757296 kubelet[2299]: E0916 04:38:19.757193 2299 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1865a968ca9972e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:38:17.098793703 +0000 UTC m=+0.950752113,LastTimestamp:2025-09-16 04:38:17.098793703 +0000 UTC m=+0.950752113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 16 04:38:20.096576 kubelet[2299]: I0916 04:38:20.096300 2299 apiserver.go:52] "Watching apiserver"
Sep 16 04:38:20.107027 kubelet[2299]: I0916 04:38:20.106975 2299 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 16 04:38:21.387125 kubelet[2299]: E0916 04:38:21.387091 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:38:21.585263 systemd[1]: Reload requested from client PID 2575 ('systemctl') (unit session-7.scope)...
Sep 16 04:38:21.585278 systemd[1]: Reloading...
Sep 16 04:38:21.651582 zram_generator::config[2618]: No configuration found.
Sep 16 04:38:21.888056 systemd[1]: Reloading finished in 302 ms.
Sep 16 04:38:21.921967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:38:21.934441 systemd[1]: kubelet.service: Deactivated successfully.
Sep 16 04:38:21.934722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:38:21.934777 systemd[1]: kubelet.service: Consumed 1.330s CPU time, 128.7M memory peak.
Sep 16 04:38:21.936407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:38:22.071600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:38:22.081842 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 16 04:38:22.122119 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 04:38:22.122119 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 16 04:38:22.122119 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 04:38:22.122474 kubelet[2660]: I0916 04:38:22.122182 2660 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 16 04:38:22.127610 kubelet[2660]: I0916 04:38:22.127570 2660 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 16 04:38:22.127610 kubelet[2660]: I0916 04:38:22.127599 2660 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 16 04:38:22.127829 kubelet[2660]: I0916 04:38:22.127801 2660 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 16 04:38:22.129169 kubelet[2660]: I0916 04:38:22.129141 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 16 04:38:22.131806 kubelet[2660]: I0916 04:38:22.131692 2660 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 04:38:22.136805 kubelet[2660]: I0916 04:38:22.136775 2660 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 04:38:22.139233 kubelet[2660]: I0916 04:38:22.139155 2660 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 04:38:22.139643 kubelet[2660]: I0916 04:38:22.139286 2660 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 16 04:38:22.139643 kubelet[2660]: I0916 04:38:22.139375 2660 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 04:38:22.139643 kubelet[2660]: I0916 04:38:22.139400 2660 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 04:38:22.139643 kubelet[2660]: I0916 04:38:22.139580 2660 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139589 2660 container_manager_linux.go:300] "Creating device plugin manager"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139625 2660 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139709 2660 kubelet.go:408] "Attempting to sync node with API server"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139720 2660 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139736 2660 kubelet.go:314] "Adding apiserver pod source"
Sep 16 04:38:22.140332 kubelet[2660]: I0916 04:38:22.139748 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 04:38:22.141606 kubelet[2660]: I0916 04:38:22.141567 2660 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 04:38:22.142033 kubelet[2660]: I0916 04:38:22.142015 2660 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 16 04:38:22.142427 kubelet[2660]: I0916 04:38:22.142408 2660 server.go:1274] "Started kubelet"
Sep 16 04:38:22.143566 kubelet[2660]: I0916 04:38:22.143542 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 04:38:22.143771 kubelet[2660]: I0916 04:38:22.143719 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 04:38:22.143945 kubelet[2660]: I0916 04:38:22.143929 2660 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 04:38:22.144009 kubelet[2660]: I0916 04:38:22.143981 2660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 04:38:22.146260 kubelet[2660]: I0916 04:38:22.146225 2660 server.go:449] "Adding debug handlers to kubelet server"
Sep 16 04:38:22.147167 kubelet[2660]: I0916 04:38:22.147139 2660 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 16 04:38:22.147241 kubelet[2660]: I0916 04:38:22.147226 2660 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 16 04:38:22.147334 kubelet[2660]: I0916 04:38:22.147321 2660 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 04:38:22.149377 kubelet[2660]: I0916 04:38:22.149351 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 04:38:22.154268 kubelet[2660]: I0916 04:38:22.154211 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 04:38:22.155217 kubelet[2660]: E0916 04:38:22.154998 2660 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 16 04:38:22.158626 kubelet[2660]: I0916 04:38:22.158593 2660 factory.go:221] Registration of the containerd container factory successfully
Sep 16 04:38:22.158626 kubelet[2660]: I0916 04:38:22.158616 2660 factory.go:221] Registration of the systemd container factory successfully
Sep 16 04:38:22.164728 kubelet[2660]: I0916 04:38:22.164695 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 16 04:38:22.167773 kubelet[2660]: I0916 04:38:22.167747 2660 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 16 04:38:22.167773 kubelet[2660]: I0916 04:38:22.167774 2660 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:38:22.167883 kubelet[2660]: I0916 04:38:22.167792 2660 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:38:22.167883 kubelet[2660]: E0916 04:38:22.167839 2660 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:38:22.192813 kubelet[2660]: I0916 04:38:22.192779 2660 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:38:22.192813 kubelet[2660]: I0916 04:38:22.192801 2660 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:38:22.192813 kubelet[2660]: I0916 04:38:22.192818 2660 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:38:22.192963 kubelet[2660]: I0916 04:38:22.192944 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:38:22.192989 kubelet[2660]: I0916 04:38:22.192958 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:38:22.192989 kubelet[2660]: I0916 04:38:22.192974 2660 policy_none.go:49] "None policy: Start" Sep 16 04:38:22.193667 kubelet[2660]: I0916 04:38:22.193612 2660 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:38:22.193667 kubelet[2660]: I0916 04:38:22.193635 2660 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:38:22.193899 kubelet[2660]: I0916 04:38:22.193885 2660 state_mem.go:75] "Updated machine memory state" Sep 16 04:38:22.197635 kubelet[2660]: I0916 04:38:22.197615 2660 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:38:22.197771 kubelet[2660]: I0916 04:38:22.197757 2660 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:38:22.197794 kubelet[2660]: I0916 04:38:22.197774 2660 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:38:22.198511 kubelet[2660]: I0916 04:38:22.198488 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:38:22.274758 kubelet[2660]: E0916 04:38:22.274724 2660 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.302192 kubelet[2660]: I0916 04:38:22.302166 2660 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:38:22.309828 kubelet[2660]: I0916 04:38:22.309706 2660 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 16 04:38:22.309828 kubelet[2660]: I0916 04:38:22.309780 2660 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:38:22.449584 kubelet[2660]: I0916 04:38:22.449189 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.449584 kubelet[2660]: I0916 04:38:22.449248 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.449584 kubelet[2660]: I0916 04:38:22.449288 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:38:22.449584 kubelet[2660]: I0916 04:38:22.449316 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.449584 kubelet[2660]: I0916 04:38:22.449348 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.449811 kubelet[2660]: I0916 04:38:22.449365 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:38:22.449811 kubelet[2660]: I0916 04:38:22.449380 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:38:22.449811 kubelet[2660]: I0916 04:38:22.449394 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:38:22.449811 kubelet[2660]: I0916 04:38:22.449411 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3f07987b26304f10e16079125ccd355-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3f07987b26304f10e16079125ccd355\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:38:22.574911 kubelet[2660]: E0916 04:38:22.574399 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:22.574911 kubelet[2660]: E0916 04:38:22.574415 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:22.575131 kubelet[2660]: E0916 04:38:22.575025 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:23.140588 kubelet[2660]: I0916 04:38:23.140530 2660 apiserver.go:52] "Watching apiserver" Sep 16 04:38:23.147676 kubelet[2660]: I0916 04:38:23.147640 2660 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:38:23.179905 kubelet[2660]: I0916 04:38:23.179803 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1797672750000001 podStartE2EDuration="1.179767275s" podCreationTimestamp="2025-09-16 04:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:38:23.179597308 +0000 UTC m=+1.094392994" watchObservedRunningTime="2025-09-16 
04:38:23.179767275 +0000 UTC m=+1.094562961" Sep 16 04:38:23.182676 kubelet[2660]: E0916 04:38:23.182650 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:23.183493 kubelet[2660]: E0916 04:38:23.183428 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:23.187263 kubelet[2660]: E0916 04:38:23.187236 2660 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:38:23.187787 kubelet[2660]: E0916 04:38:23.187482 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:23.198541 kubelet[2660]: I0916 04:38:23.198115 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.198100168 podStartE2EDuration="1.198100168s" podCreationTimestamp="2025-09-16 04:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:38:23.186496322 +0000 UTC m=+1.101292008" watchObservedRunningTime="2025-09-16 04:38:23.198100168 +0000 UTC m=+1.112895814" Sep 16 04:38:23.209252 kubelet[2660]: I0916 04:38:23.209197 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.209179157 podStartE2EDuration="2.209179157s" podCreationTimestamp="2025-09-16 04:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:38:23.199785848 +0000 UTC 
m=+1.114581534" watchObservedRunningTime="2025-09-16 04:38:23.209179157 +0000 UTC m=+1.123974843" Sep 16 04:38:24.184083 kubelet[2660]: E0916 04:38:24.184038 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:24.894196 kubelet[2660]: E0916 04:38:24.894136 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:26.360175 kubelet[2660]: E0916 04:38:26.360134 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:27.335552 kubelet[2660]: I0916 04:38:27.335373 2660 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:38:27.335720 containerd[1519]: time="2025-09-16T04:38:27.335685205Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:38:27.336033 kubelet[2660]: I0916 04:38:27.335923 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:38:28.147818 systemd[1]: Created slice kubepods-besteffort-pod6abcc37d_a15e_44de_8595_d2b6d9782289.slice - libcontainer container kubepods-besteffort-pod6abcc37d_a15e_44de_8595_d2b6d9782289.slice. 
Sep 16 04:38:28.192953 kubelet[2660]: I0916 04:38:28.192922 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6abcc37d-a15e-44de-8595-d2b6d9782289-lib-modules\") pod \"kube-proxy-d7kq7\" (UID: \"6abcc37d-a15e-44de-8595-d2b6d9782289\") " pod="kube-system/kube-proxy-d7kq7" Sep 16 04:38:28.193294 kubelet[2660]: I0916 04:38:28.192954 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfzv\" (UniqueName: \"kubernetes.io/projected/6abcc37d-a15e-44de-8595-d2b6d9782289-kube-api-access-chfzv\") pod \"kube-proxy-d7kq7\" (UID: \"6abcc37d-a15e-44de-8595-d2b6d9782289\") " pod="kube-system/kube-proxy-d7kq7" Sep 16 04:38:28.193294 kubelet[2660]: I0916 04:38:28.192989 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6abcc37d-a15e-44de-8595-d2b6d9782289-kube-proxy\") pod \"kube-proxy-d7kq7\" (UID: \"6abcc37d-a15e-44de-8595-d2b6d9782289\") " pod="kube-system/kube-proxy-d7kq7" Sep 16 04:38:28.193294 kubelet[2660]: I0916 04:38:28.193005 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6abcc37d-a15e-44de-8595-d2b6d9782289-xtables-lock\") pod \"kube-proxy-d7kq7\" (UID: \"6abcc37d-a15e-44de-8595-d2b6d9782289\") " pod="kube-system/kube-proxy-d7kq7" Sep 16 04:38:28.361461 systemd[1]: Created slice kubepods-besteffort-pod895a970c_8bd8_4a54_9426_b2b292c8c7bf.slice - libcontainer container kubepods-besteffort-pod895a970c_8bd8_4a54_9426_b2b292c8c7bf.slice. 
Sep 16 04:38:28.394563 kubelet[2660]: I0916 04:38:28.394499 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22jt\" (UniqueName: \"kubernetes.io/projected/895a970c-8bd8-4a54-9426-b2b292c8c7bf-kube-api-access-k22jt\") pod \"tigera-operator-58fc44c59b-zccfc\" (UID: \"895a970c-8bd8-4a54-9426-b2b292c8c7bf\") " pod="tigera-operator/tigera-operator-58fc44c59b-zccfc" Sep 16 04:38:28.394563 kubelet[2660]: I0916 04:38:28.394553 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/895a970c-8bd8-4a54-9426-b2b292c8c7bf-var-lib-calico\") pod \"tigera-operator-58fc44c59b-zccfc\" (UID: \"895a970c-8bd8-4a54-9426-b2b292c8c7bf\") " pod="tigera-operator/tigera-operator-58fc44c59b-zccfc" Sep 16 04:38:28.458988 kubelet[2660]: E0916 04:38:28.458791 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:28.459569 containerd[1519]: time="2025-09-16T04:38:28.459498633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7kq7,Uid:6abcc37d-a15e-44de-8595-d2b6d9782289,Namespace:kube-system,Attempt:0,}" Sep 16 04:38:28.473533 containerd[1519]: time="2025-09-16T04:38:28.473441322Z" level=info msg="connecting to shim 04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b" address="unix:///run/containerd/s/f122a0b176a22f6ca3bd0801625cc5f15ed17ee4f76f954bdcd096ea8982193d" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:38:28.493682 systemd[1]: Started cri-containerd-04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b.scope - libcontainer container 04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b. 
Sep 16 04:38:28.518057 containerd[1519]: time="2025-09-16T04:38:28.518019899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d7kq7,Uid:6abcc37d-a15e-44de-8595-d2b6d9782289,Namespace:kube-system,Attempt:0,} returns sandbox id \"04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b\"" Sep 16 04:38:28.518658 kubelet[2660]: E0916 04:38:28.518638 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:28.521705 containerd[1519]: time="2025-09-16T04:38:28.521677006Z" level=info msg="CreateContainer within sandbox \"04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:38:28.530570 containerd[1519]: time="2025-09-16T04:38:28.529718953Z" level=info msg="Container 46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:28.536967 containerd[1519]: time="2025-09-16T04:38:28.536934615Z" level=info msg="CreateContainer within sandbox \"04479a9e1ba34650308266469874b4314a9dd5877778112cecb978f751a7ff3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0\"" Sep 16 04:38:28.537536 containerd[1519]: time="2025-09-16T04:38:28.537486965Z" level=info msg="StartContainer for \"46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0\"" Sep 16 04:38:28.538978 containerd[1519]: time="2025-09-16T04:38:28.538926393Z" level=info msg="connecting to shim 46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0" address="unix:///run/containerd/s/f122a0b176a22f6ca3bd0801625cc5f15ed17ee4f76f954bdcd096ea8982193d" protocol=ttrpc version=3 Sep 16 04:38:28.557666 systemd[1]: Started cri-containerd-46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0.scope - libcontainer 
container 46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0. Sep 16 04:38:28.590820 containerd[1519]: time="2025-09-16T04:38:28.590778468Z" level=info msg="StartContainer for \"46f159a014dd796b419a5e54d1a0d4c258d9a68e2013eaa26a8cfa6c2dc078c0\" returns successfully" Sep 16 04:38:28.665857 containerd[1519]: time="2025-09-16T04:38:28.665815509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-zccfc,Uid:895a970c-8bd8-4a54-9426-b2b292c8c7bf,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:38:28.681603 containerd[1519]: time="2025-09-16T04:38:28.681559594Z" level=info msg="connecting to shim d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819" address="unix:///run/containerd/s/ba4b6421d868171081c70bcc428c5c835817585ecdb473610095d6b003ddacd1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:38:28.705697 systemd[1]: Started cri-containerd-d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819.scope - libcontainer container d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819. 
Sep 16 04:38:28.737496 containerd[1519]: time="2025-09-16T04:38:28.737402064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-zccfc,Uid:895a970c-8bd8-4a54-9426-b2b292c8c7bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819\"" Sep 16 04:38:28.739147 containerd[1519]: time="2025-09-16T04:38:28.739125027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:38:29.195750 kubelet[2660]: E0916 04:38:29.195714 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:29.204212 kubelet[2660]: I0916 04:38:29.204118 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d7kq7" podStartSLOduration=1.204103628 podStartE2EDuration="1.204103628s" podCreationTimestamp="2025-09-16 04:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:38:29.204027955 +0000 UTC m=+7.118823641" watchObservedRunningTime="2025-09-16 04:38:29.204103628 +0000 UTC m=+7.118899314" Sep 16 04:38:29.306922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280094072.mount: Deactivated successfully. Sep 16 04:38:30.004020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424201108.mount: Deactivated successfully. 
Sep 16 04:38:30.677982 containerd[1519]: time="2025-09-16T04:38:30.677929016Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:30.678391 containerd[1519]: time="2025-09-16T04:38:30.678361661Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 16 04:38:30.679326 containerd[1519]: time="2025-09-16T04:38:30.679299984Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:30.681184 containerd[1519]: time="2025-09-16T04:38:30.681148514Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:30.681841 containerd[1519]: time="2025-09-16T04:38:30.681812900Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.942657995s" Sep 16 04:38:30.681887 containerd[1519]: time="2025-09-16T04:38:30.681846697Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 16 04:38:30.684170 containerd[1519]: time="2025-09-16T04:38:30.684141350Z" level=info msg="CreateContainer within sandbox \"d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:38:30.695210 containerd[1519]: time="2025-09-16T04:38:30.694654653Z" level=info msg="Container 
72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:30.699352 containerd[1519]: time="2025-09-16T04:38:30.699321913Z" level=info msg="CreateContainer within sandbox \"d4cef82fe63cb9c9b337364ccbc70c34c7bd8154a60623b6308c309a465f4819\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9\"" Sep 16 04:38:30.700116 containerd[1519]: time="2025-09-16T04:38:30.700076892Z" level=info msg="StartContainer for \"72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9\"" Sep 16 04:38:30.701156 containerd[1519]: time="2025-09-16T04:38:30.701100688Z" level=info msg="connecting to shim 72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9" address="unix:///run/containerd/s/ba4b6421d868171081c70bcc428c5c835817585ecdb473610095d6b003ddacd1" protocol=ttrpc version=3 Sep 16 04:38:30.725670 systemd[1]: Started cri-containerd-72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9.scope - libcontainer container 72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9. 
Sep 16 04:38:30.752885 containerd[1519]: time="2025-09-16T04:38:30.752849792Z" level=info msg="StartContainer for \"72aa5a7c48c7ba961485af0feab4e08dceb3e4bf189729b3e04bd77563e061f9\" returns successfully" Sep 16 04:38:31.208136 kubelet[2660]: I0916 04:38:31.207737 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-zccfc" podStartSLOduration=1.263696599 podStartE2EDuration="3.207719157s" podCreationTimestamp="2025-09-16 04:38:28 +0000 UTC" firstStartedPulling="2025-09-16 04:38:28.738583397 +0000 UTC m=+6.653379083" lastFinishedPulling="2025-09-16 04:38:30.682605955 +0000 UTC m=+8.597401641" observedRunningTime="2025-09-16 04:38:31.207622484 +0000 UTC m=+9.122418130" watchObservedRunningTime="2025-09-16 04:38:31.207719157 +0000 UTC m=+9.122514843" Sep 16 04:38:32.391220 kubelet[2660]: E0916 04:38:32.391185 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:33.205686 kubelet[2660]: E0916 04:38:33.205266 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:34.902244 kubelet[2660]: E0916 04:38:34.902197 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:36.054897 sudo[1737]: pam_unix(sudo:session): session closed for user root Sep 16 04:38:36.057753 sshd[1736]: Connection closed by 10.0.0.1 port 57010 Sep 16 04:38:36.058249 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 16 04:38:36.062123 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:57010.service: Deactivated successfully. Sep 16 04:38:36.064130 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 16 04:38:36.064306 systemd[1]: session-7.scope: Consumed 6.009s CPU time, 218.8M memory peak. Sep 16 04:38:36.066325 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Sep 16 04:38:36.069145 systemd-logind[1503]: Removed session 7. Sep 16 04:38:36.369998 kubelet[2660]: E0916 04:38:36.369892 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:38.529101 update_engine[1504]: I20250916 04:38:38.529033 1504 update_attempter.cc:509] Updating boot flags... Sep 16 04:38:41.377187 systemd[1]: Created slice kubepods-besteffort-pod4cba4019_8600_42d4_a2b6_4adc2315e027.slice - libcontainer container kubepods-besteffort-pod4cba4019_8600_42d4_a2b6_4adc2315e027.slice. Sep 16 04:38:41.482161 kubelet[2660]: I0916 04:38:41.482101 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cba4019-8600-42d4-a2b6-4adc2315e027-tigera-ca-bundle\") pod \"calico-typha-5f875dbc74-pmn4m\" (UID: \"4cba4019-8600-42d4-a2b6-4adc2315e027\") " pod="calico-system/calico-typha-5f875dbc74-pmn4m" Sep 16 04:38:41.482161 kubelet[2660]: I0916 04:38:41.482156 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4cba4019-8600-42d4-a2b6-4adc2315e027-typha-certs\") pod \"calico-typha-5f875dbc74-pmn4m\" (UID: \"4cba4019-8600-42d4-a2b6-4adc2315e027\") " pod="calico-system/calico-typha-5f875dbc74-pmn4m" Sep 16 04:38:41.486477 kubelet[2660]: I0916 04:38:41.486434 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrhzn\" (UniqueName: \"kubernetes.io/projected/4cba4019-8600-42d4-a2b6-4adc2315e027-kube-api-access-lrhzn\") pod \"calico-typha-5f875dbc74-pmn4m\" (UID: 
\"4cba4019-8600-42d4-a2b6-4adc2315e027\") " pod="calico-system/calico-typha-5f875dbc74-pmn4m" Sep 16 04:38:41.649240 systemd[1]: Created slice kubepods-besteffort-pod42e4d2db_7eaa_4648_bf7d_ace658413143.slice - libcontainer container kubepods-besteffort-pod42e4d2db_7eaa_4648_bf7d_ace658413143.slice. Sep 16 04:38:41.685946 kubelet[2660]: E0916 04:38:41.685906 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:41.688248 containerd[1519]: time="2025-09-16T04:38:41.688203350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f875dbc74-pmn4m,Uid:4cba4019-8600-42d4-a2b6-4adc2315e027,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:41.689668 kubelet[2660]: I0916 04:38:41.689552 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42e4d2db-7eaa-4648-bf7d-ace658413143-node-certs\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.689942 kubelet[2660]: I0916 04:38:41.689859 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-var-lib-calico\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.690198 kubelet[2660]: I0916 04:38:41.690103 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-flexvol-driver-host\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.690349 kubelet[2660]: I0916 
04:38:41.690265 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-cni-log-dir\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.690867 kubelet[2660]: I0916 04:38:41.690289 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-cni-net-dir\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.690867 kubelet[2660]: I0916 04:38:41.690758 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42e4d2db-7eaa-4648-bf7d-ace658413143-tigera-ca-bundle\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.691404 kubelet[2660]: I0916 04:38:41.691321 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-xtables-lock\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.691577 kubelet[2660]: I0916 04:38:41.691562 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-cni-bin-dir\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.691873 kubelet[2660]: I0916 04:38:41.691744 2660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-lib-modules\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.692078 kubelet[2660]: I0916 04:38:41.692060 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49qf\" (UniqueName: \"kubernetes.io/projected/42e4d2db-7eaa-4648-bf7d-ace658413143-kube-api-access-h49qf\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.692703 kubelet[2660]: I0916 04:38:41.692514 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-var-run-calico\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.693032 kubelet[2660]: I0916 04:38:41.692869 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42e4d2db-7eaa-4648-bf7d-ace658413143-policysync\") pod \"calico-node-2bbmv\" (UID: \"42e4d2db-7eaa-4648-bf7d-ace658413143\") " pod="calico-system/calico-node-2bbmv" Sep 16 04:38:41.727782 containerd[1519]: time="2025-09-16T04:38:41.727723379Z" level=info msg="connecting to shim 4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453" address="unix:///run/containerd/s/4917e18135b1f9bfc556a7161be208c05399ee0fdcbbbf02fc02da99e091cd8c" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:38:41.781797 systemd[1]: Started cri-containerd-4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453.scope - libcontainer container 4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453. 
Sep 16 04:38:41.811458 kubelet[2660]: E0916 04:38:41.811303 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.811458 kubelet[2660]: W0916 04:38:41.811389 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.811458 kubelet[2660]: E0916 04:38:41.811416 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.829212 kubelet[2660]: E0916 04:38:41.828242 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xz6q" podUID="21b07629-5d22-4793-9ce0-70f06e5a1f49" Sep 16 04:38:41.856335 containerd[1519]: time="2025-09-16T04:38:41.856274448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f875dbc74-pmn4m,Uid:4cba4019-8600-42d4-a2b6-4adc2315e027,Namespace:calico-system,Attempt:0,} returns sandbox id \"4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453\"" Sep 16 04:38:41.861556 kubelet[2660]: E0916 04:38:41.861273 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:41.863263 containerd[1519]: time="2025-09-16T04:38:41.863222490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:38:41.890209 kubelet[2660]: E0916 04:38:41.890045 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.890209 kubelet[2660]: 
W0916 04:38:41.890087 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.890441 kubelet[2660]: E0916 04:38:41.890108 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.890710 kubelet[2660]: E0916 04:38:41.890640 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.890710 kubelet[2660]: W0916 04:38:41.890656 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.890710 kubelet[2660]: E0916 04:38:41.890669 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.891106 kubelet[2660]: E0916 04:38:41.891039 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.891106 kubelet[2660]: W0916 04:38:41.891053 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.891106 kubelet[2660]: E0916 04:38:41.891064 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.891544 kubelet[2660]: E0916 04:38:41.891462 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.891544 kubelet[2660]: W0916 04:38:41.891475 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.891544 kubelet[2660]: E0916 04:38:41.891486 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.892683 kubelet[2660]: E0916 04:38:41.892666 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.893043 kubelet[2660]: W0916 04:38:41.892968 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.893043 kubelet[2660]: E0916 04:38:41.892993 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.893903 kubelet[2660]: E0916 04:38:41.893691 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.893903 kubelet[2660]: W0916 04:38:41.893838 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.893903 kubelet[2660]: E0916 04:38:41.893855 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.894181 kubelet[2660]: E0916 04:38:41.894168 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.894272 kubelet[2660]: W0916 04:38:41.894238 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.894412 kubelet[2660]: E0916 04:38:41.894327 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.894720 kubelet[2660]: E0916 04:38:41.894707 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.894821 kubelet[2660]: W0916 04:38:41.894784 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.894821 kubelet[2660]: E0916 04:38:41.894801 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.895099 kubelet[2660]: E0916 04:38:41.895085 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.895203 kubelet[2660]: W0916 04:38:41.895152 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.895203 kubelet[2660]: E0916 04:38:41.895167 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.895500 kubelet[2660]: E0916 04:38:41.895441 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.895500 kubelet[2660]: W0916 04:38:41.895456 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.895500 kubelet[2660]: E0916 04:38:41.895466 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.896807 kubelet[2660]: E0916 04:38:41.896707 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.896807 kubelet[2660]: W0916 04:38:41.896732 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.896807 kubelet[2660]: E0916 04:38:41.896745 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.897549 kubelet[2660]: E0916 04:38:41.897481 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.897549 kubelet[2660]: W0916 04:38:41.897496 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.897549 kubelet[2660]: E0916 04:38:41.897507 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.899110 kubelet[2660]: E0916 04:38:41.899093 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.899236 kubelet[2660]: W0916 04:38:41.899179 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.899236 kubelet[2660]: E0916 04:38:41.899196 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.899451 kubelet[2660]: E0916 04:38:41.899438 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.899612 kubelet[2660]: W0916 04:38:41.899500 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.899612 kubelet[2660]: E0916 04:38:41.899515 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.899751 kubelet[2660]: E0916 04:38:41.899740 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.899802 kubelet[2660]: W0916 04:38:41.899792 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.901050 kubelet[2660]: E0916 04:38:41.899842 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.901304 kubelet[2660]: E0916 04:38:41.901290 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.901397 kubelet[2660]: W0916 04:38:41.901383 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.901640 kubelet[2660]: E0916 04:38:41.901624 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.903547 kubelet[2660]: E0916 04:38:41.903501 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.903620 kubelet[2660]: W0916 04:38:41.903606 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.903646 kubelet[2660]: E0916 04:38:41.903621 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.904112 kubelet[2660]: E0916 04:38:41.904093 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.904112 kubelet[2660]: W0916 04:38:41.904110 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.904182 kubelet[2660]: E0916 04:38:41.904122 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.905629 kubelet[2660]: E0916 04:38:41.905609 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.905629 kubelet[2660]: W0916 04:38:41.905628 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.905702 kubelet[2660]: E0916 04:38:41.905641 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.905879 kubelet[2660]: E0916 04:38:41.905862 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.905879 kubelet[2660]: W0916 04:38:41.905876 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.905952 kubelet[2660]: E0916 04:38:41.905886 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.906195 kubelet[2660]: E0916 04:38:41.906179 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.906195 kubelet[2660]: W0916 04:38:41.906193 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.906246 kubelet[2660]: E0916 04:38:41.906204 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.906246 kubelet[2660]: I0916 04:38:41.906228 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21b07629-5d22-4793-9ce0-70f06e5a1f49-kubelet-dir\") pod \"csi-node-driver-2xz6q\" (UID: \"21b07629-5d22-4793-9ce0-70f06e5a1f49\") " pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:41.906431 kubelet[2660]: E0916 04:38:41.906415 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.906431 kubelet[2660]: W0916 04:38:41.906431 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.906488 kubelet[2660]: E0916 04:38:41.906445 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.906488 kubelet[2660]: I0916 04:38:41.906460 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21b07629-5d22-4793-9ce0-70f06e5a1f49-registration-dir\") pod \"csi-node-driver-2xz6q\" (UID: \"21b07629-5d22-4793-9ce0-70f06e5a1f49\") " pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:41.907110 kubelet[2660]: E0916 04:38:41.907086 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.907110 kubelet[2660]: W0916 04:38:41.907105 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.907227 kubelet[2660]: E0916 04:38:41.907124 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.907227 kubelet[2660]: I0916 04:38:41.907144 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21b07629-5d22-4793-9ce0-70f06e5a1f49-socket-dir\") pod \"csi-node-driver-2xz6q\" (UID: \"21b07629-5d22-4793-9ce0-70f06e5a1f49\") " pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:41.908782 kubelet[2660]: E0916 04:38:41.908749 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.908782 kubelet[2660]: W0916 04:38:41.908769 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.908910 kubelet[2660]: E0916 04:38:41.908808 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.908910 kubelet[2660]: I0916 04:38:41.908834 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21b07629-5d22-4793-9ce0-70f06e5a1f49-varrun\") pod \"csi-node-driver-2xz6q\" (UID: \"21b07629-5d22-4793-9ce0-70f06e5a1f49\") " pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:41.909168 kubelet[2660]: E0916 04:38:41.909119 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.909168 kubelet[2660]: W0916 04:38:41.909138 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.909168 kubelet[2660]: E0916 04:38:41.909160 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.909313 kubelet[2660]: E0916 04:38:41.909295 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.909313 kubelet[2660]: W0916 04:38:41.909305 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.909382 kubelet[2660]: E0916 04:38:41.909329 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.909478 kubelet[2660]: E0916 04:38:41.909461 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.909478 kubelet[2660]: W0916 04:38:41.909471 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.909557 kubelet[2660]: E0916 04:38:41.909493 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.909674 kubelet[2660]: E0916 04:38:41.909659 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.909674 kubelet[2660]: W0916 04:38:41.909672 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.909736 kubelet[2660]: E0916 04:38:41.909698 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.909873 kubelet[2660]: E0916 04:38:41.909858 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.909873 kubelet[2660]: W0916 04:38:41.909871 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.909917 kubelet[2660]: E0916 04:38:41.909885 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.909917 kubelet[2660]: I0916 04:38:41.909905 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6799\" (UniqueName: \"kubernetes.io/projected/21b07629-5d22-4793-9ce0-70f06e5a1f49-kube-api-access-j6799\") pod \"csi-node-driver-2xz6q\" (UID: \"21b07629-5d22-4793-9ce0-70f06e5a1f49\") " pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:41.910049 kubelet[2660]: E0916 04:38:41.910037 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.910073 kubelet[2660]: W0916 04:38:41.910048 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.910073 kubelet[2660]: E0916 04:38:41.910064 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.910311 kubelet[2660]: E0916 04:38:41.910297 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.910334 kubelet[2660]: W0916 04:38:41.910311 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.910334 kubelet[2660]: E0916 04:38:41.910321 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.910515 kubelet[2660]: E0916 04:38:41.910502 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.910648 kubelet[2660]: W0916 04:38:41.910514 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.910676 kubelet[2660]: E0916 04:38:41.910656 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.911397 kubelet[2660]: E0916 04:38:41.911356 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.911431 kubelet[2660]: W0916 04:38:41.911399 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.911431 kubelet[2660]: E0916 04:38:41.911412 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.911643 kubelet[2660]: E0916 04:38:41.911629 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.911643 kubelet[2660]: W0916 04:38:41.911642 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.911712 kubelet[2660]: E0916 04:38:41.911652 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:41.911814 kubelet[2660]: E0916 04:38:41.911801 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:41.911814 kubelet[2660]: W0916 04:38:41.911812 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:41.911858 kubelet[2660]: E0916 04:38:41.911820 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:41.954365 containerd[1519]: time="2025-09-16T04:38:41.954291596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bbmv,Uid:42e4d2db-7eaa-4648-bf7d-ace658413143,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:41.974182 containerd[1519]: time="2025-09-16T04:38:41.974114888Z" level=info msg="connecting to shim 58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b" address="unix:///run/containerd/s/d901dd08d2b04d3f9d9ac6d2a44448764032575d1d5f0b9051bb73e789df51db" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:38:42.000694 systemd[1]: Started cri-containerd-58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b.scope - libcontainer container 58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b. 
Error: unexpected end of JSON input" Sep 16 04:38:42.036574 kubelet[2660]: E0916 04:38:42.035641 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:42.036815 kubelet[2660]: W0916 04:38:42.036724 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:42.036815 kubelet[2660]: E0916 04:38:42.036756 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:42.099995 containerd[1519]: time="2025-09-16T04:38:42.099939856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bbmv,Uid:42e4d2db-7eaa-4648-bf7d-ace658413143,Namespace:calico-system,Attempt:0,} returns sandbox id \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\"" Sep 16 04:38:42.968436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386396758.mount: Deactivated successfully. 
Sep 16 04:38:43.168069 kubelet[2660]: E0916 04:38:43.168023 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xz6q" podUID="21b07629-5d22-4793-9ce0-70f06e5a1f49" Sep 16 04:38:43.646139 containerd[1519]: time="2025-09-16T04:38:43.646083956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:43.646617 containerd[1519]: time="2025-09-16T04:38:43.646579935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 16 04:38:43.647430 containerd[1519]: time="2025-09-16T04:38:43.647393501Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:43.651235 containerd[1519]: time="2025-09-16T04:38:43.651193303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:43.652184 containerd[1519]: time="2025-09-16T04:38:43.651968510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.788702223s" Sep 16 04:38:43.652184 containerd[1519]: time="2025-09-16T04:38:43.652000229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 16 04:38:43.657463 containerd[1519]: time="2025-09-16T04:38:43.657437843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 04:38:43.681166 containerd[1519]: time="2025-09-16T04:38:43.681107137Z" level=info msg="CreateContainer within sandbox \"4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 04:38:43.688831 containerd[1519]: time="2025-09-16T04:38:43.688419792Z" level=info msg="Container 3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:43.695165 containerd[1519]: time="2025-09-16T04:38:43.695119593Z" level=info msg="CreateContainer within sandbox \"4777c8ca1bd7e085cea603985e3de7b2e86d5d5ee7bf25ea9db857849f8d9453\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323\"" Sep 16 04:38:43.695760 containerd[1519]: time="2025-09-16T04:38:43.695716088Z" level=info msg="StartContainer for \"3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323\"" Sep 16 04:38:43.697020 containerd[1519]: time="2025-09-16T04:38:43.696953557Z" level=info msg="connecting to shim 3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323" address="unix:///run/containerd/s/4917e18135b1f9bfc556a7161be208c05399ee0fdcbbbf02fc02da99e091cd8c" protocol=ttrpc version=3 Sep 16 04:38:43.717659 systemd[1]: Started cri-containerd-3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323.scope - libcontainer container 3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323. 
Sep 16 04:38:43.763026 containerd[1519]: time="2025-09-16T04:38:43.762922449Z" level=info msg="StartContainer for \"3b70eb9482e50c0a45f89a48f20a3305fd32225d8a6d63ee884b04ffa9d0f323\" returns successfully" Sep 16 04:38:44.229213 kubelet[2660]: E0916 04:38:44.229142 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:44.240038 kubelet[2660]: I0916 04:38:44.239893 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f875dbc74-pmn4m" podStartSLOduration=1.444447258 podStartE2EDuration="3.239677844s" podCreationTimestamp="2025-09-16 04:38:41 +0000 UTC" firstStartedPulling="2025-09-16 04:38:41.862059983 +0000 UTC m=+19.776855669" lastFinishedPulling="2025-09-16 04:38:43.657290569 +0000 UTC m=+21.572086255" observedRunningTime="2025-09-16 04:38:44.239378456 +0000 UTC m=+22.154174142" watchObservedRunningTime="2025-09-16 04:38:44.239677844 +0000 UTC m=+22.154473530" Sep 16 04:38:44.324312 kubelet[2660]: E0916 04:38:44.324279 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.324312 kubelet[2660]: W0916 04:38:44.324304 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.324500 kubelet[2660]: E0916 04:38:44.324325 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.334931 kubelet[2660]: E0916 04:38:44.334903 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.334931 kubelet[2660]: W0916 04:38:44.334921 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.334931 kubelet[2660]: E0916 04:38:44.334932 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.335135 kubelet[2660]: E0916 04:38:44.335110 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335135 kubelet[2660]: W0916 04:38:44.335123 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335135 kubelet[2660]: E0916 04:38:44.335132 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.335347 kubelet[2660]: E0916 04:38:44.335324 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335347 kubelet[2660]: W0916 04:38:44.335336 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335401 kubelet[2660]: E0916 04:38:44.335349 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.335561 kubelet[2660]: E0916 04:38:44.335509 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335561 kubelet[2660]: W0916 04:38:44.335535 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335561 kubelet[2660]: E0916 04:38:44.335552 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.335726 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335930 kubelet[2660]: W0916 04:38:44.335736 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.335751 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.335905 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335930 kubelet[2660]: W0916 04:38:44.335913 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.335927 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.336126 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.335930 kubelet[2660]: W0916 04:38:44.336134 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.335930 kubelet[2660]: E0916 04:38:44.336166 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.336447 kubelet[2660]: E0916 04:38:44.336416 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.336447 kubelet[2660]: W0916 04:38:44.336432 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.336491 kubelet[2660]: E0916 04:38:44.336451 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.337176 kubelet[2660]: E0916 04:38:44.337136 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.337176 kubelet[2660]: W0916 04:38:44.337150 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.337260 kubelet[2660]: E0916 04:38:44.337191 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.337415 kubelet[2660]: E0916 04:38:44.337348 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.337415 kubelet[2660]: W0916 04:38:44.337364 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.337415 kubelet[2660]: E0916 04:38:44.337388 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.337784 kubelet[2660]: E0916 04:38:44.337578 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.337784 kubelet[2660]: W0916 04:38:44.337589 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.337784 kubelet[2660]: E0916 04:38:44.337698 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.337784 kubelet[2660]: E0916 04:38:44.337859 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.337784 kubelet[2660]: W0916 04:38:44.337870 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.337784 kubelet[2660]: E0916 04:38:44.337885 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.338115 kubelet[2660]: E0916 04:38:44.338054 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.338115 kubelet[2660]: W0916 04:38:44.338062 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.338115 kubelet[2660]: E0916 04:38:44.338077 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.338414 kubelet[2660]: E0916 04:38:44.338332 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.338414 kubelet[2660]: W0916 04:38:44.338349 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.338414 kubelet[2660]: E0916 04:38:44.338361 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.338591 kubelet[2660]: E0916 04:38:44.338577 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.338591 kubelet[2660]: W0916 04:38:44.338590 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.338642 kubelet[2660]: E0916 04:38:44.338607 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.338984 kubelet[2660]: E0916 04:38:44.338830 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.338984 kubelet[2660]: W0916 04:38:44.338849 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.338984 kubelet[2660]: E0916 04:38:44.338864 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.339167 kubelet[2660]: E0916 04:38:44.339140 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.339167 kubelet[2660]: W0916 04:38:44.339156 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.339218 kubelet[2660]: E0916 04:38:44.339168 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:38:44.339383 kubelet[2660]: E0916 04:38:44.339365 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:38:44.339416 kubelet[2660]: W0916 04:38:44.339383 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:38:44.339416 kubelet[2660]: E0916 04:38:44.339393 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:38:44.784654 containerd[1519]: time="2025-09-16T04:38:44.784604259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:44.785268 containerd[1519]: time="2025-09-16T04:38:44.785212034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 16 04:38:44.786074 containerd[1519]: time="2025-09-16T04:38:44.786030082Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:44.789207 containerd[1519]: time="2025-09-16T04:38:44.789162757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:44.790264 containerd[1519]: time="2025-09-16T04:38:44.790154718Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.132649718s" Sep 16 04:38:44.790264 containerd[1519]: time="2025-09-16T04:38:44.790185397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 16 04:38:44.793858 containerd[1519]: time="2025-09-16T04:38:44.793805693Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:38:44.804202 containerd[1519]: time="2025-09-16T04:38:44.804138442Z" level=info msg="Container ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:44.812641 containerd[1519]: time="2025-09-16T04:38:44.812595906Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\"" Sep 16 04:38:44.813112 containerd[1519]: time="2025-09-16T04:38:44.813082366Z" level=info msg="StartContainer for \"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\"" Sep 16 04:38:44.814467 containerd[1519]: time="2025-09-16T04:38:44.814438512Z" level=info msg="connecting to shim ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc" address="unix:///run/containerd/s/d901dd08d2b04d3f9d9ac6d2a44448764032575d1d5f0b9051bb73e789df51db" protocol=ttrpc version=3 Sep 16 04:38:44.835700 systemd[1]: Started cri-containerd-ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc.scope - libcontainer container ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc. Sep 16 04:38:44.883694 systemd[1]: cri-containerd-ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc.scope: Deactivated successfully. 
Sep 16 04:38:44.967145 containerd[1519]: time="2025-09-16T04:38:44.967036925Z" level=info msg="StartContainer for \"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\" returns successfully" Sep 16 04:38:45.016873 containerd[1519]: time="2025-09-16T04:38:45.016816173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\" id:\"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\" pid:3355 exited_at:{seconds:1757997524 nanos:954319191}" Sep 16 04:38:45.016873 containerd[1519]: time="2025-09-16T04:38:45.016855252Z" level=info msg="received exit event container_id:\"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\" id:\"ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc\" pid:3355 exited_at:{seconds:1757997524 nanos:954319191}" Sep 16 04:38:45.048964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddb6589edadbc3ab8ecd6f9d87a4e25b3df33db1edb700cfaab8ea8f8c1d6ebc-rootfs.mount: Deactivated successfully. 
Sep 16 04:38:45.168098 kubelet[2660]: E0916 04:38:45.168043 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xz6q" podUID="21b07629-5d22-4793-9ce0-70f06e5a1f49" Sep 16 04:38:45.232348 kubelet[2660]: I0916 04:38:45.232308 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:38:45.233663 containerd[1519]: time="2025-09-16T04:38:45.233624339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:38:45.234579 kubelet[2660]: E0916 04:38:45.234558 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:47.169012 kubelet[2660]: E0916 04:38:47.168616 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xz6q" podUID="21b07629-5d22-4793-9ce0-70f06e5a1f49" Sep 16 04:38:48.088575 containerd[1519]: time="2025-09-16T04:38:48.088509241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:48.089821 containerd[1519]: time="2025-09-16T04:38:48.089779079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 16 04:38:48.090626 containerd[1519]: time="2025-09-16T04:38:48.090590172Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:48.093144 containerd[1519]: 
time="2025-09-16T04:38:48.093102088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:48.094366 containerd[1519]: time="2025-09-16T04:38:48.094322967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.86066007s" Sep 16 04:38:48.094427 containerd[1519]: time="2025-09-16T04:38:48.094366646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 16 04:38:48.096409 containerd[1519]: time="2025-09-16T04:38:48.096370179Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:38:48.105200 containerd[1519]: time="2025-09-16T04:38:48.104645984Z" level=info msg="Container 47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:48.118771 containerd[1519]: time="2025-09-16T04:38:48.118724635Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\"" Sep 16 04:38:48.119444 containerd[1519]: time="2025-09-16T04:38:48.119414372Z" level=info msg="StartContainer for \"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\"" Sep 16 04:38:48.125151 containerd[1519]: time="2025-09-16T04:38:48.125075903Z" 
level=info msg="connecting to shim 47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803" address="unix:///run/containerd/s/d901dd08d2b04d3f9d9ac6d2a44448764032575d1d5f0b9051bb73e789df51db" protocol=ttrpc version=3 Sep 16 04:38:48.147731 systemd[1]: Started cri-containerd-47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803.scope - libcontainer container 47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803. Sep 16 04:38:48.188396 containerd[1519]: time="2025-09-16T04:38:48.188357196Z" level=info msg="StartContainer for \"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\" returns successfully" Sep 16 04:38:48.719043 systemd[1]: cri-containerd-47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803.scope: Deactivated successfully. Sep 16 04:38:48.719815 systemd[1]: cri-containerd-47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803.scope: Consumed 461ms CPU time, 180.7M memory peak, 3.1M read from disk, 165.8M written to disk. Sep 16 04:38:48.721924 containerd[1519]: time="2025-09-16T04:38:48.721867994Z" level=info msg="received exit event container_id:\"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\" id:\"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\" pid:3413 exited_at:{seconds:1757997528 nanos:721638401}" Sep 16 04:38:48.722381 containerd[1519]: time="2025-09-16T04:38:48.722348458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\" id:\"47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803\" pid:3413 exited_at:{seconds:1757997528 nanos:721638401}" Sep 16 04:38:48.741713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47dc52b9caf1e46cb07f7e9c2a43148b7a6e34f26dfcdb8e68cd916671180803-rootfs.mount: Deactivated successfully. 
Sep 16 04:38:48.810636 kubelet[2660]: I0916 04:38:48.810450 2660 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:38:48.860499 systemd[1]: Created slice kubepods-burstable-pod345a3988_a1fa_40e8_a61e_cb58d51f3859.slice - libcontainer container kubepods-burstable-pod345a3988_a1fa_40e8_a61e_cb58d51f3859.slice. Sep 16 04:38:48.871273 kubelet[2660]: I0916 04:38:48.871217 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br4x\" (UniqueName: \"kubernetes.io/projected/10063a3f-440a-47f3-879b-39e52a324bb6-kube-api-access-7br4x\") pod \"calico-apiserver-598d56f686-td8sm\" (UID: \"10063a3f-440a-47f3-879b-39e52a324bb6\") " pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" Sep 16 04:38:48.871734 kubelet[2660]: I0916 04:38:48.871279 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c89780b-17c6-45c4-84ba-46926b8f574a-calico-apiserver-certs\") pod \"calico-apiserver-598d56f686-bzhph\" (UID: \"1c89780b-17c6-45c4-84ba-46926b8f574a\") " pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" Sep 16 04:38:48.871734 kubelet[2660]: I0916 04:38:48.871313 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/345a3988-a1fa-40e8-a61e-cb58d51f3859-config-volume\") pod \"coredns-7c65d6cfc9-jjvtr\" (UID: \"345a3988-a1fa-40e8-a61e-cb58d51f3859\") " pod="kube-system/coredns-7c65d6cfc9-jjvtr" Sep 16 04:38:48.871734 kubelet[2660]: I0916 04:38:48.871332 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10063a3f-440a-47f3-879b-39e52a324bb6-calico-apiserver-certs\") pod \"calico-apiserver-598d56f686-td8sm\" (UID: \"10063a3f-440a-47f3-879b-39e52a324bb6\") " 
pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" Sep 16 04:38:48.872090 kubelet[2660]: I0916 04:38:48.871370 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgbk\" (UniqueName: \"kubernetes.io/projected/345a3988-a1fa-40e8-a61e-cb58d51f3859-kube-api-access-nfgbk\") pod \"coredns-7c65d6cfc9-jjvtr\" (UID: \"345a3988-a1fa-40e8-a61e-cb58d51f3859\") " pod="kube-system/coredns-7c65d6cfc9-jjvtr" Sep 16 04:38:48.872142 kubelet[2660]: I0916 04:38:48.872115 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgx6\" (UniqueName: \"kubernetes.io/projected/1c89780b-17c6-45c4-84ba-46926b8f574a-kube-api-access-rrgx6\") pod \"calico-apiserver-598d56f686-bzhph\" (UID: \"1c89780b-17c6-45c4-84ba-46926b8f574a\") " pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" Sep 16 04:38:48.878307 systemd[1]: Created slice kubepods-besteffort-pod3bf55d73_1b48_4bbc_9036_dc58b5a86afe.slice - libcontainer container kubepods-besteffort-pod3bf55d73_1b48_4bbc_9036_dc58b5a86afe.slice. Sep 16 04:38:48.889468 systemd[1]: Created slice kubepods-besteffort-pod10063a3f_440a_47f3_879b_39e52a324bb6.slice - libcontainer container kubepods-besteffort-pod10063a3f_440a_47f3_879b_39e52a324bb6.slice. Sep 16 04:38:48.897610 systemd[1]: Created slice kubepods-besteffort-pod1c89780b_17c6_45c4_84ba_46926b8f574a.slice - libcontainer container kubepods-besteffort-pod1c89780b_17c6_45c4_84ba_46926b8f574a.slice. Sep 16 04:38:48.904832 systemd[1]: Created slice kubepods-burstable-pod82ca3e66_8970_4c71_82e7_8c7e79d757de.slice - libcontainer container kubepods-burstable-pod82ca3e66_8970_4c71_82e7_8c7e79d757de.slice. Sep 16 04:38:48.910618 systemd[1]: Created slice kubepods-besteffort-pod32497091_782d_420f_ae11_8538d9e76009.slice - libcontainer container kubepods-besteffort-pod32497091_782d_420f_ae11_8538d9e76009.slice. 
Sep 16 04:38:48.917360 systemd[1]: Created slice kubepods-besteffort-pod6e9b6363_f232_4579_863a_13e71d23df04.slice - libcontainer container kubepods-besteffort-pod6e9b6363_f232_4579_863a_13e71d23df04.slice. Sep 16 04:38:48.972587 kubelet[2660]: I0916 04:38:48.972369 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32497091-782d-420f-ae11-8538d9e76009-config\") pod \"goldmane-7988f88666-nxklv\" (UID: \"32497091-782d-420f-ae11-8538d9e76009\") " pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:48.972587 kubelet[2660]: I0916 04:38:48.972422 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32497091-782d-420f-ae11-8538d9e76009-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nxklv\" (UID: \"32497091-782d-420f-ae11-8538d9e76009\") " pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:48.972587 kubelet[2660]: I0916 04:38:48.972457 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7tbb\" (UniqueName: \"kubernetes.io/projected/32497091-782d-420f-ae11-8538d9e76009-kube-api-access-d7tbb\") pod \"goldmane-7988f88666-nxklv\" (UID: \"32497091-782d-420f-ae11-8538d9e76009\") " pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:48.972587 kubelet[2660]: I0916 04:38:48.972484 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms5d\" (UniqueName: \"kubernetes.io/projected/3bf55d73-1b48-4bbc-9036-dc58b5a86afe-kube-api-access-fms5d\") pod \"calico-kube-controllers-86d8c7cc48-j9qwt\" (UID: \"3bf55d73-1b48-4bbc-9036-dc58b5a86afe\") " pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" Sep 16 04:38:48.972587 kubelet[2660]: I0916 04:38:48.972502 2660 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bf55d73-1b48-4bbc-9036-dc58b5a86afe-tigera-ca-bundle\") pod \"calico-kube-controllers-86d8c7cc48-j9qwt\" (UID: \"3bf55d73-1b48-4bbc-9036-dc58b5a86afe\") " pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" Sep 16 04:38:48.973154 kubelet[2660]: I0916 04:38:48.973076 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e9b6363-f232-4579-863a-13e71d23df04-whisker-backend-key-pair\") pod \"whisker-5c58bb5c4-24fkn\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " pod="calico-system/whisker-5c58bb5c4-24fkn" Sep 16 04:38:48.973154 kubelet[2660]: I0916 04:38:48.973138 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/32497091-782d-420f-ae11-8538d9e76009-goldmane-key-pair\") pod \"goldmane-7988f88666-nxklv\" (UID: \"32497091-782d-420f-ae11-8538d9e76009\") " pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:48.973154 kubelet[2660]: I0916 04:38:48.973156 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9sx9\" (UniqueName: \"kubernetes.io/projected/6e9b6363-f232-4579-863a-13e71d23df04-kube-api-access-r9sx9\") pod \"whisker-5c58bb5c4-24fkn\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " pod="calico-system/whisker-5c58bb5c4-24fkn" Sep 16 04:38:48.973261 kubelet[2660]: I0916 04:38:48.973171 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82ca3e66-8970-4c71-82e7-8c7e79d757de-config-volume\") pod \"coredns-7c65d6cfc9-zbk44\" (UID: \"82ca3e66-8970-4c71-82e7-8c7e79d757de\") " pod="kube-system/coredns-7c65d6cfc9-zbk44" Sep 16 
04:38:48.974655 kubelet[2660]: I0916 04:38:48.974624 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5k2\" (UniqueName: \"kubernetes.io/projected/82ca3e66-8970-4c71-82e7-8c7e79d757de-kube-api-access-sm5k2\") pod \"coredns-7c65d6cfc9-zbk44\" (UID: \"82ca3e66-8970-4c71-82e7-8c7e79d757de\") " pod="kube-system/coredns-7c65d6cfc9-zbk44" Sep 16 04:38:48.975069 kubelet[2660]: I0916 04:38:48.974672 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9b6363-f232-4579-863a-13e71d23df04-whisker-ca-bundle\") pod \"whisker-5c58bb5c4-24fkn\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " pod="calico-system/whisker-5c58bb5c4-24fkn" Sep 16 04:38:49.172306 kubelet[2660]: E0916 04:38:49.171225 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:49.172440 containerd[1519]: time="2025-09-16T04:38:49.171698851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjvtr,Uid:345a3988-a1fa-40e8-a61e-cb58d51f3859,Namespace:kube-system,Attempt:0,}" Sep 16 04:38:49.173661 systemd[1]: Created slice kubepods-besteffort-pod21b07629_5d22_4793_9ce0_70f06e5a1f49.slice - libcontainer container kubepods-besteffort-pod21b07629_5d22_4793_9ce0_70f06e5a1f49.slice. 
Sep 16 04:38:49.176870 containerd[1519]: time="2025-09-16T04:38:49.176838726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xz6q,Uid:21b07629-5d22-4793-9ce0-70f06e5a1f49,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:49.198124 containerd[1519]: time="2025-09-16T04:38:49.197838016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-td8sm,Uid:10063a3f-440a-47f3-879b-39e52a324bb6,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:38:49.198124 containerd[1519]: time="2025-09-16T04:38:49.197843536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d8c7cc48-j9qwt,Uid:3bf55d73-1b48-4bbc-9036-dc58b5a86afe,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:49.202604 containerd[1519]: time="2025-09-16T04:38:49.201509779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-bzhph,Uid:1c89780b-17c6-45c4-84ba-46926b8f574a,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:38:49.208818 kubelet[2660]: E0916 04:38:49.208409 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:38:49.209421 containerd[1519]: time="2025-09-16T04:38:49.209114776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zbk44,Uid:82ca3e66-8970-4c71-82e7-8c7e79d757de,Namespace:kube-system,Attempt:0,}" Sep 16 04:38:49.215409 containerd[1519]: time="2025-09-16T04:38:49.215374216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nxklv,Uid:32497091-782d-420f-ae11-8538d9e76009,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:49.225447 containerd[1519]: time="2025-09-16T04:38:49.225336978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c58bb5c4-24fkn,Uid:6e9b6363-f232-4579-863a-13e71d23df04,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:49.256979 containerd[1519]: 
time="2025-09-16T04:38:49.256535062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:38:49.301588 containerd[1519]: time="2025-09-16T04:38:49.300534218Z" level=error msg="Failed to destroy network for sandbox \"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.304195 containerd[1519]: time="2025-09-16T04:38:49.304136703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-td8sm,Uid:10063a3f-440a-47f3-879b-39e52a324bb6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.305820 kubelet[2660]: E0916 04:38:49.305774 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.305964 kubelet[2660]: E0916 04:38:49.305945 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" Sep 16 04:38:49.306032 kubelet[2660]: E0916 04:38:49.306018 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" Sep 16 04:38:49.306170 kubelet[2660]: E0916 04:38:49.306134 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598d56f686-td8sm_calico-apiserver(10063a3f-440a-47f3-879b-39e52a324bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598d56f686-td8sm_calico-apiserver(10063a3f-440a-47f3-879b-39e52a324bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b734fc9f3aaf245d5ed490baba1ba2be9ea9df924d41133b6e687d801e5950d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" podUID="10063a3f-440a-47f3-879b-39e52a324bb6" Sep 16 04:38:49.317126 containerd[1519]: time="2025-09-16T04:38:49.317076570Z" level=error msg="Failed to destroy network for sandbox \"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.318085 containerd[1519]: time="2025-09-16T04:38:49.318033499Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjvtr,Uid:345a3988-a1fa-40e8-a61e-cb58d51f3859,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.318291 kubelet[2660]: E0916 04:38:49.318253 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.318338 kubelet[2660]: E0916 04:38:49.318305 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jjvtr" Sep 16 04:38:49.318372 kubelet[2660]: E0916 04:38:49.318323 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-jjvtr" Sep 16 04:38:49.318406 kubelet[2660]: E0916 04:38:49.318381 2660 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-jjvtr_kube-system(345a3988-a1fa-40e8-a61e-cb58d51f3859)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-jjvtr_kube-system(345a3988-a1fa-40e8-a61e-cb58d51f3859)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b14428ffca3696dd17c46d745b1f4a535f9f339ffbfaf9ac56e1b0a1f9ec785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-jjvtr" podUID="345a3988-a1fa-40e8-a61e-cb58d51f3859" Sep 16 04:38:49.319855 containerd[1519]: time="2025-09-16T04:38:49.319389496Z" level=error msg="Failed to destroy network for sandbox \"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.321834 containerd[1519]: time="2025-09-16T04:38:49.321787499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nxklv,Uid:32497091-782d-420f-ae11-8538d9e76009,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.322002 kubelet[2660]: E0916 04:38:49.321968 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.322052 kubelet[2660]: E0916 04:38:49.322009 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:49.322052 kubelet[2660]: E0916 04:38:49.322025 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nxklv" Sep 16 04:38:49.322103 kubelet[2660]: E0916 04:38:49.322070 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nxklv_calico-system(32497091-782d-420f-ae11-8538d9e76009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nxklv_calico-system(32497091-782d-420f-ae11-8538d9e76009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb1ce75e6283d75a9227eaf1e92271feb914450231e6f9420e5801df73c3286b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nxklv" podUID="32497091-782d-420f-ae11-8538d9e76009" Sep 16 04:38:49.333748 containerd[1519]: time="2025-09-16T04:38:49.333703079Z" level=error 
msg="Failed to destroy network for sandbox \"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.336898 containerd[1519]: time="2025-09-16T04:38:49.336849099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c58bb5c4-24fkn,Uid:6e9b6363-f232-4579-863a-13e71d23df04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.337021 containerd[1519]: time="2025-09-16T04:38:49.336969775Z" level=error msg="Failed to destroy network for sandbox \"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.337103 kubelet[2660]: E0916 04:38:49.337070 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.337151 kubelet[2660]: E0916 04:38:49.337129 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c58bb5c4-24fkn" Sep 16 04:38:49.337183 kubelet[2660]: E0916 04:38:49.337148 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c58bb5c4-24fkn" Sep 16 04:38:49.337208 kubelet[2660]: E0916 04:38:49.337190 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c58bb5c4-24fkn_calico-system(6e9b6363-f232-4579-863a-13e71d23df04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c58bb5c4-24fkn_calico-system(6e9b6363-f232-4579-863a-13e71d23df04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61dcd012dc6da0d7987ee8db157273ec68ac805738b814d6b90ca5026ccd55b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c58bb5c4-24fkn" podUID="6e9b6363-f232-4579-863a-13e71d23df04" Sep 16 04:38:49.339590 containerd[1519]: time="2025-09-16T04:38:49.339528533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zbk44,Uid:82ca3e66-8970-4c71-82e7-8c7e79d757de,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.339784 kubelet[2660]: E0916 04:38:49.339755 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.339823 kubelet[2660]: E0916 04:38:49.339803 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zbk44" Sep 16 04:38:49.339859 kubelet[2660]: E0916 04:38:49.339820 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zbk44" Sep 16 04:38:49.339885 kubelet[2660]: E0916 04:38:49.339854 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zbk44_kube-system(82ca3e66-8970-4c71-82e7-8c7e79d757de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zbk44_kube-system(82ca3e66-8970-4c71-82e7-8c7e79d757de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"172993999642d878841589b578d9cb1ea31e2c32929b81d71e898f9f0bf7d949\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zbk44" podUID="82ca3e66-8970-4c71-82e7-8c7e79d757de" Sep 16 04:38:49.346703 containerd[1519]: time="2025-09-16T04:38:49.346655186Z" level=error msg="Failed to destroy network for sandbox \"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.348554 containerd[1519]: time="2025-09-16T04:38:49.348482247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xz6q,Uid:21b07629-5d22-4793-9ce0-70f06e5a1f49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.348722 kubelet[2660]: E0916 04:38:49.348695 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.348787 kubelet[2660]: E0916 04:38:49.348740 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:49.348787 kubelet[2660]: E0916 04:38:49.348756 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xz6q" Sep 16 04:38:49.348833 kubelet[2660]: E0916 04:38:49.348789 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2xz6q_calico-system(21b07629-5d22-4793-9ce0-70f06e5a1f49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2xz6q_calico-system(21b07629-5d22-4793-9ce0-70f06e5a1f49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84cf7ba3366304dfa64edffbd1a31269e7dbd187cc7ce8045d2a9105e239678f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2xz6q" podUID="21b07629-5d22-4793-9ce0-70f06e5a1f49" Sep 16 04:38:49.354341 containerd[1519]: time="2025-09-16T04:38:49.354294942Z" level=error msg="Failed to destroy network for sandbox \"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.355958 
containerd[1519]: time="2025-09-16T04:38:49.355893171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d8c7cc48-j9qwt,Uid:3bf55d73-1b48-4bbc-9036-dc58b5a86afe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.356178 kubelet[2660]: E0916 04:38:49.356150 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.356375 kubelet[2660]: E0916 04:38:49.356282 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" Sep 16 04:38:49.356375 kubelet[2660]: E0916 04:38:49.356304 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" Sep 16 04:38:49.356375 kubelet[2660]: E0916 04:38:49.356344 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d8c7cc48-j9qwt_calico-system(3bf55d73-1b48-4bbc-9036-dc58b5a86afe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d8c7cc48-j9qwt_calico-system(3bf55d73-1b48-4bbc-9036-dc58b5a86afe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13d3f21b3a679da4889cf830d8eb07ec203ba97ed9370434d918067b3b0cb495\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" podUID="3bf55d73-1b48-4bbc-9036-dc58b5a86afe" Sep 16 04:38:49.361546 containerd[1519]: time="2025-09-16T04:38:49.361470353Z" level=error msg="Failed to destroy network for sandbox \"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.362400 containerd[1519]: time="2025-09-16T04:38:49.362367364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-bzhph,Uid:1c89780b-17c6-45c4-84ba-46926b8f574a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.362671 kubelet[2660]: E0916 04:38:49.362642 2660 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:38:49.362734 kubelet[2660]: E0916 04:38:49.362684 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" Sep 16 04:38:49.362734 kubelet[2660]: E0916 04:38:49.362700 2660 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" Sep 16 04:38:49.362778 kubelet[2660]: E0916 04:38:49.362730 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-598d56f686-bzhph_calico-apiserver(1c89780b-17c6-45c4-84ba-46926b8f574a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-598d56f686-bzhph_calico-apiserver(1c89780b-17c6-45c4-84ba-46926b8f574a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e04c9703d858a987a5caa9e0055aab8268846d4174828eeb8578701a72891d73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" podUID="1c89780b-17c6-45c4-84ba-46926b8f574a" Sep 16 04:38:50.105269 systemd[1]: run-netns-cni\x2de96dd0b9\x2dde83\x2dd268\x2d7e07\x2da4db249d574c.mount: Deactivated successfully. Sep 16 04:38:50.106357 systemd[1]: run-netns-cni\x2d6ebf5f50\x2d2e97\x2d2e34\x2d2cd0\x2d1ce965bd08a0.mount: Deactivated successfully. Sep 16 04:38:50.106414 systemd[1]: run-netns-cni\x2db9248743\x2d2f9f\x2d7db6\x2dd743\x2d6821979750f5.mount: Deactivated successfully. Sep 16 04:38:50.106461 systemd[1]: run-netns-cni\x2d590de4b8\x2d3ebc\x2d9631\x2db5f9\x2d18752f5ecc33.mount: Deactivated successfully. Sep 16 04:38:52.231174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20443735.mount: Deactivated successfully. Sep 16 04:38:52.536333 containerd[1519]: time="2025-09-16T04:38:52.536234999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 16 04:38:52.539635 containerd[1519]: time="2025-09-16T04:38:52.539608584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.283026163s" Sep 16 04:38:52.539746 containerd[1519]: time="2025-09-16T04:38:52.539639383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 16 04:38:52.539828 containerd[1519]: time="2025-09-16T04:38:52.539767260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:52.553301 containerd[1519]: 
time="2025-09-16T04:38:52.553256998Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:52.554999 containerd[1519]: time="2025-09-16T04:38:52.554173572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:52.563589 containerd[1519]: time="2025-09-16T04:38:52.563489948Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:38:52.576028 containerd[1519]: time="2025-09-16T04:38:52.575688483Z" level=info msg="Container d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:52.592366 containerd[1519]: time="2025-09-16T04:38:52.592321892Z" level=info msg="CreateContainer within sandbox \"58d3df0fb0b1c3e621cbdde812496cf133f23819ef7bb46fec0b78d6625f740b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\"" Sep 16 04:38:52.593181 containerd[1519]: time="2025-09-16T04:38:52.592967594Z" level=info msg="StartContainer for \"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\"" Sep 16 04:38:52.594737 containerd[1519]: time="2025-09-16T04:38:52.594710985Z" level=info msg="connecting to shim d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a" address="unix:///run/containerd/s/d901dd08d2b04d3f9d9ac6d2a44448764032575d1d5f0b9051bb73e789df51db" protocol=ttrpc version=3 Sep 16 04:38:52.618733 systemd[1]: Started cri-containerd-d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a.scope - libcontainer container 
d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a. Sep 16 04:38:52.660788 containerd[1519]: time="2025-09-16T04:38:52.660741236Z" level=info msg="StartContainer for \"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\" returns successfully" Sep 16 04:38:52.782893 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:38:52.782989 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 16 04:38:53.002699 kubelet[2660]: I0916 04:38:53.002592 2660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9sx9\" (UniqueName: \"kubernetes.io/projected/6e9b6363-f232-4579-863a-13e71d23df04-kube-api-access-r9sx9\") pod \"6e9b6363-f232-4579-863a-13e71d23df04\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " Sep 16 04:38:53.002699 kubelet[2660]: I0916 04:38:53.002640 2660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9b6363-f232-4579-863a-13e71d23df04-whisker-ca-bundle\") pod \"6e9b6363-f232-4579-863a-13e71d23df04\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " Sep 16 04:38:53.002699 kubelet[2660]: I0916 04:38:53.002664 2660 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e9b6363-f232-4579-863a-13e71d23df04-whisker-backend-key-pair\") pod \"6e9b6363-f232-4579-863a-13e71d23df04\" (UID: \"6e9b6363-f232-4579-863a-13e71d23df04\") " Sep 16 04:38:53.010177 kubelet[2660]: I0916 04:38:53.010133 2660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b6363-f232-4579-863a-13e71d23df04-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6e9b6363-f232-4579-863a-13e71d23df04" (UID: "6e9b6363-f232-4579-863a-13e71d23df04"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 04:38:53.010501 kubelet[2660]: I0916 04:38:53.010467 2660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9b6363-f232-4579-863a-13e71d23df04-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6e9b6363-f232-4579-863a-13e71d23df04" (UID: "6e9b6363-f232-4579-863a-13e71d23df04"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:38:53.010804 kubelet[2660]: I0916 04:38:53.010773 2660 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b6363-f232-4579-863a-13e71d23df04-kube-api-access-r9sx9" (OuterVolumeSpecName: "kube-api-access-r9sx9") pod "6e9b6363-f232-4579-863a-13e71d23df04" (UID: "6e9b6363-f232-4579-863a-13e71d23df04"). InnerVolumeSpecName "kube-api-access-r9sx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:38:53.103294 kubelet[2660]: I0916 04:38:53.103250 2660 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9sx9\" (UniqueName: \"kubernetes.io/projected/6e9b6363-f232-4579-863a-13e71d23df04-kube-api-access-r9sx9\") on node \"localhost\" DevicePath \"\"" Sep 16 04:38:53.103294 kubelet[2660]: I0916 04:38:53.103281 2660 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9b6363-f232-4579-863a-13e71d23df04-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 16 04:38:53.103294 kubelet[2660]: I0916 04:38:53.103290 2660 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e9b6363-f232-4579-863a-13e71d23df04-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 16 04:38:53.232043 systemd[1]: var-lib-kubelet-pods-6e9b6363\x2df232\x2d4579\x2d863a\x2d13e71d23df04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9sx9.mount: 
Deactivated successfully. Sep 16 04:38:53.232145 systemd[1]: var-lib-kubelet-pods-6e9b6363\x2df232\x2d4579\x2d863a\x2d13e71d23df04-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:38:53.278753 systemd[1]: Removed slice kubepods-besteffort-pod6e9b6363_f232_4579_863a_13e71d23df04.slice - libcontainer container kubepods-besteffort-pod6e9b6363_f232_4579_863a_13e71d23df04.slice. Sep 16 04:38:53.318195 kubelet[2660]: I0916 04:38:53.317380 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2bbmv" podStartSLOduration=1.8665191330000002 podStartE2EDuration="12.317364268s" podCreationTimestamp="2025-09-16 04:38:41 +0000 UTC" firstStartedPulling="2025-09-16 04:38:42.102700095 +0000 UTC m=+20.017495781" lastFinishedPulling="2025-09-16 04:38:52.55354523 +0000 UTC m=+30.468340916" observedRunningTime="2025-09-16 04:38:53.316122622 +0000 UTC m=+31.230918388" watchObservedRunningTime="2025-09-16 04:38:53.317364268 +0000 UTC m=+31.232159954" Sep 16 04:38:53.338981 systemd[1]: Created slice kubepods-besteffort-podd9538160_e62b_4519_acad_9bab36690ac0.slice - libcontainer container kubepods-besteffort-podd9538160_e62b_4519_acad_9bab36690ac0.slice. 
Sep 16 04:38:53.404844 kubelet[2660]: I0916 04:38:53.404763 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9538160-e62b-4519-acad-9bab36690ac0-whisker-ca-bundle\") pod \"whisker-54547dddf-vhlr4\" (UID: \"d9538160-e62b-4519-acad-9bab36690ac0\") " pod="calico-system/whisker-54547dddf-vhlr4" Sep 16 04:38:53.404844 kubelet[2660]: I0916 04:38:53.404819 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9538160-e62b-4519-acad-9bab36690ac0-whisker-backend-key-pair\") pod \"whisker-54547dddf-vhlr4\" (UID: \"d9538160-e62b-4519-acad-9bab36690ac0\") " pod="calico-system/whisker-54547dddf-vhlr4" Sep 16 04:38:53.404844 kubelet[2660]: I0916 04:38:53.404839 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bbps\" (UniqueName: \"kubernetes.io/projected/d9538160-e62b-4519-acad-9bab36690ac0-kube-api-access-4bbps\") pod \"whisker-54547dddf-vhlr4\" (UID: \"d9538160-e62b-4519-acad-9bab36690ac0\") " pod="calico-system/whisker-54547dddf-vhlr4" Sep 16 04:38:53.645204 containerd[1519]: time="2025-09-16T04:38:53.645084780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54547dddf-vhlr4,Uid:d9538160-e62b-4519-acad-9bab36690ac0,Namespace:calico-system,Attempt:0,}" Sep 16 04:38:53.823303 systemd-networkd[1422]: cali31e99b41cb4: Link UP Sep 16 04:38:53.824142 systemd-networkd[1422]: cali31e99b41cb4: Gained carrier Sep 16 04:38:53.835969 containerd[1519]: 2025-09-16 04:38:53.682 [INFO][3793] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:38:53.835969 containerd[1519]: 2025-09-16 04:38:53.713 [INFO][3793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54547dddf--vhlr4-eth0 
whisker-54547dddf- calico-system d9538160-e62b-4519-acad-9bab36690ac0 915 0 2025-09-16 04:38:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54547dddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54547dddf-vhlr4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali31e99b41cb4 [] [] }} ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-" Sep 16 04:38:53.835969 containerd[1519]: 2025-09-16 04:38:53.713 [INFO][3793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.835969 containerd[1519]: 2025-09-16 04:38:53.778 [INFO][3808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" HandleID="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Workload="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.779 [INFO][3808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" HandleID="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Workload="localhost-k8s-whisker--54547dddf--vhlr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011d2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54547dddf-vhlr4", "timestamp":"2025-09-16 04:38:53.778814297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.779 [INFO][3808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.779 [INFO][3808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.779 [INFO][3808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.789 [INFO][3808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" host="localhost" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.795 [INFO][3808] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.799 [INFO][3808] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.800 [INFO][3808] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.803 [INFO][3808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:38:53.837314 containerd[1519]: 2025-09-16 04:38:53.803 [INFO][3808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" host="localhost" Sep 16 04:38:53.837568 containerd[1519]: 2025-09-16 04:38:53.805 [INFO][3808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5 Sep 16 04:38:53.837568 
containerd[1519]: 2025-09-16 04:38:53.809 [INFO][3808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" host="localhost" Sep 16 04:38:53.837568 containerd[1519]: 2025-09-16 04:38:53.813 [INFO][3808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" host="localhost" Sep 16 04:38:53.837568 containerd[1519]: 2025-09-16 04:38:53.813 [INFO][3808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" host="localhost" Sep 16 04:38:53.837568 containerd[1519]: 2025-09-16 04:38:53.813 [INFO][3808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:38:53.837568 containerd[1519]: 2025-09-16 04:38:53.813 [INFO][3808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" HandleID="k8s-pod-network.8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Workload="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.837680 containerd[1519]: 2025-09-16 04:38:53.816 [INFO][3793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54547dddf--vhlr4-eth0", GenerateName:"whisker-54547dddf-", Namespace:"calico-system", SelfLink:"", UID:"d9538160-e62b-4519-acad-9bab36690ac0", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 16, 4, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54547dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54547dddf-vhlr4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali31e99b41cb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:38:53.837680 containerd[1519]: 2025-09-16 04:38:53.816 [INFO][3793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.837751 containerd[1519]: 2025-09-16 04:38:53.816 [INFO][3793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31e99b41cb4 ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.837751 containerd[1519]: 2025-09-16 04:38:53.824 [INFO][3793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" 
WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.837788 containerd[1519]: 2025-09-16 04:38:53.825 [INFO][3793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54547dddf--vhlr4-eth0", GenerateName:"whisker-54547dddf-", Namespace:"calico-system", SelfLink:"", UID:"d9538160-e62b-4519-acad-9bab36690ac0", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54547dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5", Pod:"whisker-54547dddf-vhlr4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali31e99b41cb4", MAC:"c2:fb:f5:f9:fc:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:38:53.837841 containerd[1519]: 2025-09-16 04:38:53.833 [INFO][3793] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" Namespace="calico-system" Pod="whisker-54547dddf-vhlr4" WorkloadEndpoint="localhost-k8s-whisker--54547dddf--vhlr4-eth0" Sep 16 04:38:53.900695 containerd[1519]: time="2025-09-16T04:38:53.900601420Z" level=info msg="connecting to shim 8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5" address="unix:///run/containerd/s/5c3f402b41c89d2f9cdc2cbceb2268321bc6e3d7c39b207a3c3147e5a85ffaff" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:38:53.922744 systemd[1]: Started cri-containerd-8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5.scope - libcontainer container 8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5. Sep 16 04:38:53.933044 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:38:53.953494 containerd[1519]: time="2025-09-16T04:38:53.953448220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54547dddf-vhlr4,Uid:d9538160-e62b-4519-acad-9bab36690ac0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5\"" Sep 16 04:38:53.957156 containerd[1519]: time="2025-09-16T04:38:53.956943285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:38:54.174961 kubelet[2660]: I0916 04:38:54.174849 2660 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b6363-f232-4579-863a-13e71d23df04" path="/var/lib/kubelet/pods/6e9b6363-f232-4579-863a-13e71d23df04/volumes" Sep 16 04:38:54.276227 kubelet[2660]: I0916 04:38:54.276186 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:38:55.118450 containerd[1519]: time="2025-09-16T04:38:55.118405584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\" 
id:\"9843ddca3ab9a0b558f3375d96cda3223c78b91a4f9c754ad0adbc25c3513958\" pid:3980 exit_status:1 exited_at:{seconds:1757997535 nanos:118124631}" Sep 16 04:38:55.192480 containerd[1519]: time="2025-09-16T04:38:55.192398711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\" id:\"c3f890a5213fac55d99119f789858483022d8da91a1244d35b197aa3b20d9e43\" pid:4006 exit_status:1 exited_at:{seconds:1757997535 nanos:192091279}" Sep 16 04:38:55.239433 systemd-networkd[1422]: cali31e99b41cb4: Gained IPv6LL Sep 16 04:38:55.365414 containerd[1519]: time="2025-09-16T04:38:55.365378012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\" id:\"ead1cc067bb4c06a34cb64c16fc16484262d2c6880ba1b8d6e760bdcdd824892\" pid:4053 exit_status:1 exited_at:{seconds:1757997535 nanos:365124018}" Sep 16 04:38:55.404420 containerd[1519]: time="2025-09-16T04:38:55.404307146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:55.405050 containerd[1519]: time="2025-09-16T04:38:55.405014688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 16 04:38:55.405747 containerd[1519]: time="2025-09-16T04:38:55.405722350Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:55.407921 containerd[1519]: time="2025-09-16T04:38:55.407895935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:55.408588 containerd[1519]: time="2025-09-16T04:38:55.408556879Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.451578715s" Sep 16 04:38:55.408625 containerd[1519]: time="2025-09-16T04:38:55.408592438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 16 04:38:55.411885 containerd[1519]: time="2025-09-16T04:38:55.411837196Z" level=info msg="CreateContainer within sandbox \"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:38:55.417706 containerd[1519]: time="2025-09-16T04:38:55.417672088Z" level=info msg="Container a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:55.423364 containerd[1519]: time="2025-09-16T04:38:55.423321785Z" level=info msg="CreateContainer within sandbox \"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d\"" Sep 16 04:38:55.424051 containerd[1519]: time="2025-09-16T04:38:55.424024287Z" level=info msg="StartContainer for \"a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d\"" Sep 16 04:38:55.425207 containerd[1519]: time="2025-09-16T04:38:55.425177138Z" level=info msg="connecting to shim a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d" address="unix:///run/containerd/s/5c3f402b41c89d2f9cdc2cbceb2268321bc6e3d7c39b207a3c3147e5a85ffaff" protocol=ttrpc version=3 Sep 16 04:38:55.445782 systemd[1]: Started 
cri-containerd-a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d.scope - libcontainer container a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d. Sep 16 04:38:55.481203 containerd[1519]: time="2025-09-16T04:38:55.481159361Z" level=info msg="StartContainer for \"a5e5bf6df022a8ab0a3eb313c45a2de64bc31d69d4c7c173e4db22ca7104dd0d\" returns successfully" Sep 16 04:38:55.482889 containerd[1519]: time="2025-09-16T04:38:55.482827398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:38:56.990740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983428923.mount: Deactivated successfully. Sep 16 04:38:57.003932 containerd[1519]: time="2025-09-16T04:38:57.003877365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:57.004347 containerd[1519]: time="2025-09-16T04:38:57.004289915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 16 04:38:57.005398 containerd[1519]: time="2025-09-16T04:38:57.005053137Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:57.007087 containerd[1519]: time="2025-09-16T04:38:57.007050730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:38:57.007865 containerd[1519]: time="2025-09-16T04:38:57.007833271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.524960074s" Sep 16 04:38:57.007964 containerd[1519]: time="2025-09-16T04:38:57.007949108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 16 04:38:57.011760 containerd[1519]: time="2025-09-16T04:38:57.011727579Z" level=info msg="CreateContainer within sandbox \"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:38:57.019724 containerd[1519]: time="2025-09-16T04:38:57.019693871Z" level=info msg="Container 1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:38:57.026815 containerd[1519]: time="2025-09-16T04:38:57.026754984Z" level=info msg="CreateContainer within sandbox \"8bb7bf98d4e2ef6c9c66d004be3e49afdd1edb7e46c339ce84e0b58e113afdb5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d\"" Sep 16 04:38:57.027543 containerd[1519]: time="2025-09-16T04:38:57.027444528Z" level=info msg="StartContainer for \"1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d\"" Sep 16 04:38:57.028679 containerd[1519]: time="2025-09-16T04:38:57.028620340Z" level=info msg="connecting to shim 1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d" address="unix:///run/containerd/s/5c3f402b41c89d2f9cdc2cbceb2268321bc6e3d7c39b207a3c3147e5a85ffaff" protocol=ttrpc version=3 Sep 16 04:38:57.056671 systemd[1]: Started cri-containerd-1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d.scope - libcontainer container 1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d. 
Sep 16 04:38:57.090724 containerd[1519]: time="2025-09-16T04:38:57.090675354Z" level=info msg="StartContainer for \"1126f9fd6454a874823995454a0bb911d29490895b1b5fd6ddcf4e1a082f1f4d\" returns successfully" Sep 16 04:38:57.333111 kubelet[2660]: I0916 04:38:57.332996 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54547dddf-vhlr4" podStartSLOduration=1.278213902 podStartE2EDuration="4.33297899s" podCreationTimestamp="2025-09-16 04:38:53 +0000 UTC" firstStartedPulling="2025-09-16 04:38:53.955228092 +0000 UTC m=+31.870023778" lastFinishedPulling="2025-09-16 04:38:57.00999318 +0000 UTC m=+34.924788866" observedRunningTime="2025-09-16 04:38:57.332410283 +0000 UTC m=+35.247205969" watchObservedRunningTime="2025-09-16 04:38:57.33297899 +0000 UTC m=+35.247774676" Sep 16 04:39:00.169672 containerd[1519]: time="2025-09-16T04:39:00.169371713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xz6q,Uid:21b07629-5d22-4793-9ce0-70f06e5a1f49,Namespace:calico-system,Attempt:0,}" Sep 16 04:39:00.342678 systemd-networkd[1422]: cali384a7751caa: Link UP Sep 16 04:39:00.342839 systemd-networkd[1422]: cali384a7751caa: Gained carrier Sep 16 04:39:00.366555 containerd[1519]: 2025-09-16 04:39:00.221 [INFO][4252] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:00.366555 containerd[1519]: 2025-09-16 04:39:00.240 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2xz6q-eth0 csi-node-driver- calico-system 21b07629-5d22-4793-9ce0-70f06e5a1f49 728 0 2025-09-16 04:38:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost 
csi-node-driver-2xz6q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali384a7751caa [] [] }} ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-" Sep 16 04:39:00.366555 containerd[1519]: 2025-09-16 04:39:00.242 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.366555 containerd[1519]: 2025-09-16 04:39:00.288 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" HandleID="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Workload="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.288 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" HandleID="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Workload="localhost-k8s-csi--node--driver--2xz6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2xz6q", "timestamp":"2025-09-16 04:39:00.288230802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.288 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.288 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.288 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.305 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" host="localhost" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.312 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.318 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.321 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.324 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:00.366830 containerd[1519]: 2025-09-16 04:39:00.324 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" host="localhost" Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.326 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.331 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" host="localhost" Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.336 [INFO][4266] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" host="localhost" Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.337 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" host="localhost" Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.337 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:00.367057 containerd[1519]: 2025-09-16 04:39:00.337 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" HandleID="k8s-pod-network.e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Workload="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.367196 containerd[1519]: 2025-09-16 04:39:00.340 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2xz6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b07629-5d22-4793-9ce0-70f06e5a1f49", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2xz6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali384a7751caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:00.367253 containerd[1519]: 2025-09-16 04:39:00.340 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.367253 containerd[1519]: 2025-09-16 04:39:00.340 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali384a7751caa ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.367253 containerd[1519]: 2025-09-16 04:39:00.343 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.367318 containerd[1519]: 2025-09-16 04:39:00.343 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2xz6q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b07629-5d22-4793-9ce0-70f06e5a1f49", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace", Pod:"csi-node-driver-2xz6q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali384a7751caa", MAC:"ca:ec:08:7a:1e:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:00.367371 containerd[1519]: 2025-09-16 04:39:00.357 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" 
Namespace="calico-system" Pod="csi-node-driver-2xz6q" WorkloadEndpoint="localhost-k8s-csi--node--driver--2xz6q-eth0" Sep 16 04:39:00.454259 containerd[1519]: time="2025-09-16T04:39:00.454135361Z" level=info msg="connecting to shim e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace" address="unix:///run/containerd/s/fd7468a294d551ca6d3470d48e2dc8af4c5765656814ded014673656aed1c9ec" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:00.489789 systemd[1]: Started cri-containerd-e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace.scope - libcontainer container e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace. Sep 16 04:39:00.500650 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:00.513430 containerd[1519]: time="2025-09-16T04:39:00.513384649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xz6q,Uid:21b07629-5d22-4793-9ce0-70f06e5a1f49,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace\"" Sep 16 04:39:00.514885 containerd[1519]: time="2025-09-16T04:39:00.514791819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:39:01.169078 kubelet[2660]: E0916 04:39:01.169036 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:01.169728 containerd[1519]: time="2025-09-16T04:39:01.169678430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zbk44,Uid:82ca3e66-8970-4c71-82e7-8c7e79d757de,Namespace:kube-system,Attempt:0,}" Sep 16 04:39:01.276236 systemd-networkd[1422]: cali2fd75a55d13: Link UP Sep 16 04:39:01.276673 systemd-networkd[1422]: cali2fd75a55d13: Gained carrier Sep 16 04:39:01.290444 containerd[1519]: 2025-09-16 04:39:01.192 [INFO][4355] cni-plugin/utils.go 
100: File /var/lib/calico/mtu does not exist Sep 16 04:39:01.290444 containerd[1519]: 2025-09-16 04:39:01.207 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0 coredns-7c65d6cfc9- kube-system 82ca3e66-8970-4c71-82e7-8c7e79d757de 847 0 2025-09-16 04:38:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zbk44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fd75a55d13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-" Sep 16 04:39:01.290444 containerd[1519]: 2025-09-16 04:39:01.207 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.290444 containerd[1519]: 2025-09-16 04:39:01.233 [INFO][4370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" HandleID="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Workload="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.233 [INFO][4370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" HandleID="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Workload="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zbk44", "timestamp":"2025-09-16 04:39:01.233826613 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.234 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.234 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.234 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.243 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" host="localhost" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.250 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.254 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.256 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.258 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:01.290788 containerd[1519]: 2025-09-16 04:39:01.258 [INFO][4370] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" host="localhost" Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.260 [INFO][4370] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49 Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.263 [INFO][4370] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" host="localhost" Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.270 [INFO][4370] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" host="localhost" Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.270 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" host="localhost" Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.270 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:39:01.291078 containerd[1519]: 2025-09-16 04:39:01.270 [INFO][4370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" HandleID="k8s-pod-network.11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Workload="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.291195 containerd[1519]: 2025-09-16 04:39:01.274 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"82ca3e66-8970-4c71-82e7-8c7e79d757de", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zbk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd75a55d13", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:01.291280 containerd[1519]: 2025-09-16 04:39:01.274 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.291280 containerd[1519]: 2025-09-16 04:39:01.274 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fd75a55d13 ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.291280 containerd[1519]: 2025-09-16 04:39:01.276 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.291360 containerd[1519]: 2025-09-16 04:39:01.277 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"82ca3e66-8970-4c71-82e7-8c7e79d757de", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49", Pod:"coredns-7c65d6cfc9-zbk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fd75a55d13", MAC:"2e:27:04:43:16:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:01.291360 containerd[1519]: 2025-09-16 04:39:01.287 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zbk44" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zbk44-eth0" Sep 16 04:39:01.323769 containerd[1519]: time="2025-09-16T04:39:01.323722261Z" level=info msg="connecting to shim 11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49" address="unix:///run/containerd/s/1fa204bc15ca944833178d0e68c87546a3fda2891a0682fc14eedf190426c8e5" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:01.354692 systemd[1]: Started cri-containerd-11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49.scope - libcontainer container 11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49. Sep 16 04:39:01.372614 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:01.400992 containerd[1519]: time="2025-09-16T04:39:01.400939132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zbk44,Uid:82ca3e66-8970-4c71-82e7-8c7e79d757de,Namespace:kube-system,Attempt:0,} returns sandbox id \"11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49\"" Sep 16 04:39:01.401665 kubelet[2660]: E0916 04:39:01.401639 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:01.404438 containerd[1519]: time="2025-09-16T04:39:01.404174625Z" level=info msg="CreateContainer within sandbox \"11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:39:01.429174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128957512.mount: Deactivated successfully. 
Sep 16 04:39:01.429856 containerd[1519]: time="2025-09-16T04:39:01.429773171Z" level=info msg="Container 6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:01.433046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463100582.mount: Deactivated successfully. Sep 16 04:39:01.437296 containerd[1519]: time="2025-09-16T04:39:01.437229136Z" level=info msg="CreateContainer within sandbox \"11183df7b9a278f28b88ab73b25ea8f0cf4878c8612cb2562420929c217a2d49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22\"" Sep 16 04:39:01.437891 containerd[1519]: time="2025-09-16T04:39:01.437747285Z" level=info msg="StartContainer for \"6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22\"" Sep 16 04:39:01.438757 containerd[1519]: time="2025-09-16T04:39:01.438727065Z" level=info msg="connecting to shim 6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22" address="unix:///run/containerd/s/1fa204bc15ca944833178d0e68c87546a3fda2891a0682fc14eedf190426c8e5" protocol=ttrpc version=3 Sep 16 04:39:01.458798 systemd[1]: Started cri-containerd-6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22.scope - libcontainer container 6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22. 
Sep 16 04:39:01.490552 containerd[1519]: time="2025-09-16T04:39:01.490477427Z" level=info msg="StartContainer for \"6fae834b68f5f1eeb0b8c83b21df1db3e5c5c5358c97d72d0f0dc3e99290fa22\" returns successfully" Sep 16 04:39:01.521356 containerd[1519]: time="2025-09-16T04:39:01.521306465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:01.522092 containerd[1519]: time="2025-09-16T04:39:01.521940452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 16 04:39:01.522922 containerd[1519]: time="2025-09-16T04:39:01.522889392Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:01.525424 containerd[1519]: time="2025-09-16T04:39:01.525391060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:01.526300 containerd[1519]: time="2025-09-16T04:39:01.526180003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.011293626s" Sep 16 04:39:01.526300 containerd[1519]: time="2025-09-16T04:39:01.526212283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 16 04:39:01.528687 containerd[1519]: time="2025-09-16T04:39:01.528639352Z" level=info msg="CreateContainer within sandbox 
\"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:39:01.537183 containerd[1519]: time="2025-09-16T04:39:01.537148495Z" level=info msg="Container 9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:01.551843 containerd[1519]: time="2025-09-16T04:39:01.551798790Z" level=info msg="CreateContainer within sandbox \"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2\"" Sep 16 04:39:01.552552 containerd[1519]: time="2025-09-16T04:39:01.552397017Z" level=info msg="StartContainer for \"9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2\"" Sep 16 04:39:01.556006 containerd[1519]: time="2025-09-16T04:39:01.555419714Z" level=info msg="connecting to shim 9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2" address="unix:///run/containerd/s/fd7468a294d551ca6d3470d48e2dc8af4c5765656814ded014673656aed1c9ec" protocol=ttrpc version=3 Sep 16 04:39:01.575712 systemd[1]: Started cri-containerd-9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2.scope - libcontainer container 9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2. 
Sep 16 04:39:01.614753 containerd[1519]: time="2025-09-16T04:39:01.614705799Z" level=info msg="StartContainer for \"9d7d964f0fc40b32321eb1494f3f9d8a9d4c79b68a1ab728095bfb1afe432ae2\" returns successfully" Sep 16 04:39:01.617074 containerd[1519]: time="2025-09-16T04:39:01.617024231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 04:39:02.175184 kubelet[2660]: E0916 04:39:02.174997 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:02.175620 containerd[1519]: time="2025-09-16T04:39:02.175294224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjvtr,Uid:345a3988-a1fa-40e8-a61e-cb58d51f3859,Namespace:kube-system,Attempt:0,}" Sep 16 04:39:02.271547 systemd-networkd[1422]: calidd033503375: Link UP Sep 16 04:39:02.271778 systemd-networkd[1422]: calidd033503375: Gained carrier Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.198 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.212 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0 coredns-7c65d6cfc9- kube-system 345a3988-a1fa-40e8-a61e-cb58d51f3859 839 0 2025-09-16 04:38:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-jjvtr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd033503375 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.212 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.234 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" HandleID="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Workload="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.235 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" HandleID="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Workload="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-jjvtr", "timestamp":"2025-09-16 04:39:02.234919698 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.235 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.235 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.235 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.243 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.247 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.251 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.254 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.256 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.256 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.257 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15 Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.260 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.266 [INFO][4538] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.266 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" host="localhost" Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.266 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:02.285179 containerd[1519]: 2025-09-16 04:39:02.266 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" HandleID="k8s-pod-network.3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Workload="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.268 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"345a3988-a1fa-40e8-a61e-cb58d51f3859", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-jjvtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd033503375", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.269 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.269 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd033503375 ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.272 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.273 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"345a3988-a1fa-40e8-a61e-cb58d51f3859", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15", Pod:"coredns-7c65d6cfc9-jjvtr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd033503375", MAC:"92:f1:4d:98:19:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:02.285943 containerd[1519]: 2025-09-16 04:39:02.283 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" Namespace="kube-system" Pod="coredns-7c65d6cfc9-jjvtr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--jjvtr-eth0" Sep 16 04:39:02.301150 containerd[1519]: time="2025-09-16T04:39:02.301108518Z" level=info msg="connecting to shim 3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15" address="unix:///run/containerd/s/78feaa3754ac566f8614b12ecf0ca0c05ef687e6f0d238ab52700a00a73e17f6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:02.332201 systemd[1]: Started cri-containerd-3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15.scope - libcontainer container 3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15. 
Sep 16 04:39:02.341667 systemd-networkd[1422]: cali384a7751caa: Gained IPv6LL Sep 16 04:39:02.344010 kubelet[2660]: E0916 04:39:02.343963 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:02.348301 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:02.373392 kubelet[2660]: I0916 04:39:02.373337 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zbk44" podStartSLOduration=34.373321136 podStartE2EDuration="34.373321136s" podCreationTimestamp="2025-09-16 04:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:39:02.35882507 +0000 UTC m=+40.273620716" watchObservedRunningTime="2025-09-16 04:39:02.373321136 +0000 UTC m=+40.288116822" Sep 16 04:39:02.391253 containerd[1519]: time="2025-09-16T04:39:02.391172615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jjvtr,Uid:345a3988-a1fa-40e8-a61e-cb58d51f3859,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15\"" Sep 16 04:39:02.392140 kubelet[2660]: E0916 04:39:02.392117 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:02.394651 containerd[1519]: time="2025-09-16T04:39:02.394617185Z" level=info msg="CreateContainer within sandbox \"3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:39:02.408456 containerd[1519]: time="2025-09-16T04:39:02.407864157Z" level=info msg="Container 
ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:02.414441 containerd[1519]: time="2025-09-16T04:39:02.414400505Z" level=info msg="CreateContainer within sandbox \"3ece8eece449318c036b74fae30e82b3bf023e40e1277bceca93dbff10143a15\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109\"" Sep 16 04:39:02.415309 containerd[1519]: time="2025-09-16T04:39:02.415278367Z" level=info msg="StartContainer for \"ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109\"" Sep 16 04:39:02.416263 containerd[1519]: time="2025-09-16T04:39:02.416234228Z" level=info msg="connecting to shim ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109" address="unix:///run/containerd/s/78feaa3754ac566f8614b12ecf0ca0c05ef687e6f0d238ab52700a00a73e17f6" protocol=ttrpc version=3 Sep 16 04:39:02.438697 systemd[1]: Started cri-containerd-ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109.scope - libcontainer container ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109. 
Sep 16 04:39:02.477013 containerd[1519]: time="2025-09-16T04:39:02.476756083Z" level=info msg="StartContainer for \"ac7dfe35bf8e8b744465e3a8583f3b6a82a4b8ebf1da59871dea878391f10109\" returns successfully" Sep 16 04:39:02.671994 containerd[1519]: time="2025-09-16T04:39:02.671940333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:02.673549 containerd[1519]: time="2025-09-16T04:39:02.673332465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 16 04:39:02.674491 containerd[1519]: time="2025-09-16T04:39:02.674454842Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:02.676548 containerd[1519]: time="2025-09-16T04:39:02.676482761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:02.677191 containerd[1519]: time="2025-09-16T04:39:02.677163827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.060104797s" Sep 16 04:39:02.677268 containerd[1519]: time="2025-09-16T04:39:02.677254105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 16 04:39:02.680376 containerd[1519]: 
time="2025-09-16T04:39:02.680337883Z" level=info msg="CreateContainer within sandbox \"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 04:39:02.687707 containerd[1519]: time="2025-09-16T04:39:02.686656595Z" level=info msg="Container 151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:02.695528 containerd[1519]: time="2025-09-16T04:39:02.695410818Z" level=info msg="CreateContainer within sandbox \"e7a9747f8b7191e451584a5612ee79dfbf774bc0c0e7d83ba194206d4074dace\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548\"" Sep 16 04:39:02.696033 containerd[1519]: time="2025-09-16T04:39:02.695924367Z" level=info msg="StartContainer for \"151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548\"" Sep 16 04:39:02.697595 containerd[1519]: time="2025-09-16T04:39:02.697567534Z" level=info msg="connecting to shim 151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548" address="unix:///run/containerd/s/fd7468a294d551ca6d3470d48e2dc8af4c5765656814ded014673656aed1c9ec" protocol=ttrpc version=3 Sep 16 04:39:02.721738 systemd[1]: Started cri-containerd-151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548.scope - libcontainer container 151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548. Sep 16 04:39:02.792264 containerd[1519]: time="2025-09-16T04:39:02.792228298Z" level=info msg="StartContainer for \"151138dfc6c966978ecabc39d8425a830542837fdd25b5d1514dac4d70b76548\" returns successfully" Sep 16 04:39:02.810402 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:48960.service - OpenSSH per-connection server daemon (10.0.0.1:48960). 
Sep 16 04:39:02.887131 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 48960 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:02.888512 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:02.893181 systemd-logind[1503]: New session 8 of user core. Sep 16 04:39:02.906653 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:39:02.981965 systemd-networkd[1422]: cali2fd75a55d13: Gained IPv6LL Sep 16 04:39:03.085205 sshd[4701]: Connection closed by 10.0.0.1 port 48960 Sep 16 04:39:03.085881 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:03.089876 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:39:03.090092 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:48960.service: Deactivated successfully. Sep 16 04:39:03.093477 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:39:03.095294 systemd-logind[1503]: Removed session 8. 
Sep 16 04:39:03.168684 containerd[1519]: time="2025-09-16T04:39:03.168620054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d8c7cc48-j9qwt,Uid:3bf55d73-1b48-4bbc-9036-dc58b5a86afe,Namespace:calico-system,Attempt:0,}" Sep 16 04:39:03.247131 kubelet[2660]: I0916 04:39:03.247022 2660 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:39:03.248803 kubelet[2660]: I0916 04:39:03.248777 2660 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:39:03.301649 systemd-networkd[1422]: calidd033503375: Gained IPv6LL Sep 16 04:39:03.313577 systemd-networkd[1422]: cali29a3abc4f52: Link UP Sep 16 04:39:03.314304 systemd-networkd[1422]: cali29a3abc4f52: Gained carrier Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.221 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.238 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0 calico-kube-controllers-86d8c7cc48- calico-system 3bf55d73-1b48-4bbc-9036-dc58b5a86afe 850 0 2025-09-16 04:38:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d8c7cc48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86d8c7cc48-j9qwt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali29a3abc4f52 [] [] }} ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" 
Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.238 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.274 [INFO][4731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" HandleID="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Workload="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.274 [INFO][4731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" HandleID="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Workload="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86d8c7cc48-j9qwt", "timestamp":"2025-09-16 04:39:03.274231015 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.274 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.274 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.274 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.282 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.287 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.291 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.293 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.295 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.295 [INFO][4731] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.296 [INFO][4731] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.302 [INFO][4731] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.308 [INFO][4731] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.308 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" host="localhost" Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.308 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:03.330317 containerd[1519]: 2025-09-16 04:39:03.308 [INFO][4731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" HandleID="k8s-pod-network.7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Workload="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.310 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0", GenerateName:"calico-kube-controllers-86d8c7cc48-", Namespace:"calico-system", SelfLink:"", UID:"3bf55d73-1b48-4bbc-9036-dc58b5a86afe", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"86d8c7cc48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86d8c7cc48-j9qwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29a3abc4f52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.311 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.311 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29a3abc4f52 ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.315 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.315 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0", GenerateName:"calico-kube-controllers-86d8c7cc48-", Namespace:"calico-system", SelfLink:"", UID:"3bf55d73-1b48-4bbc-9036-dc58b5a86afe", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d8c7cc48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad", Pod:"calico-kube-controllers-86d8c7cc48-j9qwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29a3abc4f52", MAC:"96:8f:43:c2:6b:a4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:03.331173 containerd[1519]: 2025-09-16 04:39:03.327 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" Namespace="calico-system" Pod="calico-kube-controllers-86d8c7cc48-j9qwt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86d8c7cc48--j9qwt-eth0" Sep 16 04:39:03.349328 containerd[1519]: time="2025-09-16T04:39:03.348913626Z" level=info msg="connecting to shim 7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad" address="unix:///run/containerd/s/1801394bb9d866689dfd185b4860e1787b6d64a30aa83467e8ac029ac74aa8ad" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:03.361244 kubelet[2660]: E0916 04:39:03.361069 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:03.361244 kubelet[2660]: E0916 04:39:03.361132 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:03.375539 kubelet[2660]: I0916 04:39:03.375466 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2xz6q" podStartSLOduration=20.211680484 podStartE2EDuration="22.375446343s" podCreationTimestamp="2025-09-16 04:38:41 +0000 UTC" firstStartedPulling="2025-09-16 04:39:00.514376388 +0000 UTC m=+38.429172074" lastFinishedPulling="2025-09-16 04:39:02.678142247 +0000 UTC m=+40.592937933" observedRunningTime="2025-09-16 04:39:03.373935373 +0000 UTC m=+41.288731059" watchObservedRunningTime="2025-09-16 04:39:03.375446343 +0000 UTC m=+41.290242029" Sep 16 04:39:03.376744 systemd[1]: Started 
cri-containerd-7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad.scope - libcontainer container 7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad. Sep 16 04:39:03.393266 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:03.419794 containerd[1519]: time="2025-09-16T04:39:03.419753551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d8c7cc48-j9qwt,Uid:3bf55d73-1b48-4bbc-9036-dc58b5a86afe,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad\"" Sep 16 04:39:03.421365 containerd[1519]: time="2025-09-16T04:39:03.421324680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 04:39:03.924749 kubelet[2660]: I0916 04:39:03.924703 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:39:03.927037 kubelet[2660]: E0916 04:39:03.927004 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:03.941700 kubelet[2660]: I0916 04:39:03.941513 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jjvtr" podStartSLOduration=35.941498402 podStartE2EDuration="35.941498402s" podCreationTimestamp="2025-09-16 04:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:39:03.385502505 +0000 UTC m=+41.300298191" watchObservedRunningTime="2025-09-16 04:39:03.941498402 +0000 UTC m=+41.856294088" Sep 16 04:39:04.170589 containerd[1519]: time="2025-09-16T04:39:04.170546901Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-nxklv,Uid:32497091-782d-420f-ae11-8538d9e76009,Namespace:calico-system,Attempt:0,}" Sep 16 04:39:04.170589 containerd[1519]: time="2025-09-16T04:39:04.170547501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-td8sm,Uid:10063a3f-440a-47f3-879b-39e52a324bb6,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:39:04.170589 containerd[1519]: time="2025-09-16T04:39:04.170928734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-bzhph,Uid:1c89780b-17c6-45c4-84ba-46926b8f574a,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:39:04.345192 systemd-networkd[1422]: calic4e2c85ca08: Link UP Sep 16 04:39:04.345646 systemd-networkd[1422]: calic4e2c85ca08: Gained carrier Sep 16 04:39:04.364248 kubelet[2660]: E0916 04:39:04.363440 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:04.364248 kubelet[2660]: E0916 04:39:04.363715 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.214 [INFO][4823] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.232 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0 calico-apiserver-598d56f686- calico-apiserver 10063a3f-440a-47f3-879b-39e52a324bb6 848 0 2025-09-16 04:38:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598d56f686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-598d56f686-td8sm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4e2c85ca08 [] [] }} ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.232 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.279 [INFO][4869] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" HandleID="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Workload="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.279 [INFO][4869] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" HandleID="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Workload="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-598d56f686-td8sm", "timestamp":"2025-09-16 04:39:04.279311137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.279 [INFO][4869] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.279 [INFO][4869] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.279 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.294 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.304 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.315 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.320 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.323 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.323 [INFO][4869] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.324 [INFO][4869] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238 Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.328 [INFO][4869] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.333 [INFO][4869] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.333 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" host="localhost" Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.333 [INFO][4869] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:04.378371 containerd[1519]: 2025-09-16 04:39:04.333 [INFO][4869] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" HandleID="k8s-pod-network.9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Workload="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.338 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0", GenerateName:"calico-apiserver-598d56f686-", Namespace:"calico-apiserver", SelfLink:"", UID:"10063a3f-440a-47f3-879b-39e52a324bb6", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 37, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d56f686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-598d56f686-td8sm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4e2c85ca08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.338 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.338 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4e2c85ca08 ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.346 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.346 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0", GenerateName:"calico-apiserver-598d56f686-", Namespace:"calico-apiserver", SelfLink:"", UID:"10063a3f-440a-47f3-879b-39e52a324bb6", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d56f686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238", Pod:"calico-apiserver-598d56f686-td8sm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4e2c85ca08", MAC:"46:2f:97:f6:70:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.379184 containerd[1519]: 2025-09-16 04:39:04.375 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-td8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--td8sm-eth0" Sep 16 04:39:04.382796 kubelet[2660]: E0916 04:39:04.382762 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:04.438534 containerd[1519]: time="2025-09-16T04:39:04.438477327Z" level=info msg="connecting to shim 9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238" address="unix:///run/containerd/s/80c703971b9ef22553adae02c8aa56f48afcae5e6a9b73ae3b787badbc0b611b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:04.478161 systemd-networkd[1422]: cali9d425911a5c: Link UP Sep 16 04:39:04.480263 systemd-networkd[1422]: cali9d425911a5c: Gained carrier Sep 16 04:39:04.497746 systemd[1]: Started cri-containerd-9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238.scope - libcontainer container 9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238. 
Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.211 [INFO][4834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.227 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0 calico-apiserver-598d56f686- calico-apiserver 1c89780b-17c6-45c4-84ba-46926b8f574a 851 0 2025-09-16 04:38:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:598d56f686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-598d56f686-bzhph eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d425911a5c [] [] }} ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.227 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.299 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" HandleID="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Workload="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.299 [INFO][4862] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" HandleID="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Workload="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-598d56f686-bzhph", "timestamp":"2025-09-16 04:39:04.299023559 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.299 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.333 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.334 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.401 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.421 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.438 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.442 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.449 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.449 [INFO][4862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.452 [INFO][4862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832 Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.461 [INFO][4862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.467 [INFO][4862] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.467 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" host="localhost" Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.467 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:04.500412 containerd[1519]: 2025-09-16 04:39:04.467 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" HandleID="k8s-pod-network.0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Workload="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.471 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0", GenerateName:"calico-apiserver-598d56f686-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c89780b-17c6-45c4-84ba-46926b8f574a", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d56f686", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-598d56f686-bzhph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d425911a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.471 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.471 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d425911a5c ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.480 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.482 [INFO][4834] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0", GenerateName:"calico-apiserver-598d56f686-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c89780b-17c6-45c4-84ba-46926b8f574a", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"598d56f686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832", Pod:"calico-apiserver-598d56f686-bzhph", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d425911a5c", MAC:"be:a8:cc:4d:a9:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.501197 containerd[1519]: 2025-09-16 04:39:04.495 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" Namespace="calico-apiserver" Pod="calico-apiserver-598d56f686-bzhph" WorkloadEndpoint="localhost-k8s-calico--apiserver--598d56f686--bzhph-eth0" Sep 16 04:39:04.514437 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:04.535563 containerd[1519]: time="2025-09-16T04:39:04.535354271Z" level=info msg="connecting to shim 0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832" address="unix:///run/containerd/s/cf92b46c5fd89768908f185204c8051de7a1a6063c697d200656c605bd52d376" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:04.582113 systemd-networkd[1422]: cali29a3abc4f52: Gained IPv6LL Sep 16 04:39:04.591230 containerd[1519]: time="2025-09-16T04:39:04.590243419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-td8sm,Uid:10063a3f-440a-47f3-879b-39e52a324bb6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238\"" Sep 16 04:39:04.595278 systemd-networkd[1422]: cali519b94d9edd: Link UP Sep 16 04:39:04.596333 systemd-networkd[1422]: cali519b94d9edd: Gained carrier Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.246 [INFO][4817] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.276 [INFO][4817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--nxklv-eth0 goldmane-7988f88666- calico-system 32497091-782d-420f-ae11-8538d9e76009 849 0 2025-09-16 04:38:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-nxklv 
eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali519b94d9edd [] [] }} ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.277 [INFO][4817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.332 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" HandleID="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Workload="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.332 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" HandleID="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Workload="localhost-k8s-goldmane--7988f88666--nxklv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-nxklv", "timestamp":"2025-09-16 04:39:04.332127125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.332 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.468 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.469 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.495 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.524 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.531 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.535 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.539 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.539 [INFO][4880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.545 [INFO][4880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609 Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.554 [INFO][4880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.567 [INFO][4880] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.567 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" host="localhost" Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.567 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:39:04.620206 containerd[1519]: 2025-09-16 04:39:04.567 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" HandleID="k8s-pod-network.9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Workload="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.578 [INFO][4817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--nxklv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"32497091-782d-420f-ae11-8538d9e76009", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-nxklv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali519b94d9edd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.581 [INFO][4817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.582 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali519b94d9edd ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.596 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.598 [INFO][4817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--nxklv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"32497091-782d-420f-ae11-8538d9e76009", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609", Pod:"goldmane-7988f88666-nxklv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali519b94d9edd", MAC:"c2:8a:be:03:66:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:39:04.620826 containerd[1519]: 2025-09-16 04:39:04.617 [INFO][4817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" Namespace="calico-system" Pod="goldmane-7988f88666-nxklv" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--nxklv-eth0" Sep 16 04:39:04.637777 systemd[1]: Started cri-containerd-0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832.scope - libcontainer container 0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832. Sep 16 04:39:04.652535 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:04.659535 containerd[1519]: time="2025-09-16T04:39:04.659464292Z" level=info msg="connecting to shim 9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609" address="unix:///run/containerd/s/b34c397819f86e065388c31a05111e54cc7a59d17c15163bb6be54e534f7ea36" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:39:04.692776 systemd[1]: Started cri-containerd-9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609.scope - libcontainer container 9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609. Sep 16 04:39:04.697850 containerd[1519]: time="2025-09-16T04:39:04.697710680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-598d56f686-bzhph,Uid:1c89780b-17c6-45c4-84ba-46926b8f574a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832\"" Sep 16 04:39:04.709952 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:39:04.744209 containerd[1519]: time="2025-09-16T04:39:04.744169069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nxklv,Uid:32497091-782d-420f-ae11-8538d9e76009,Namespace:calico-system,Attempt:0,} returns sandbox id \"9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609\"" Sep 16 04:39:05.014620 systemd-networkd[1422]: vxlan.calico: Link UP Sep 16 04:39:05.014626 systemd-networkd[1422]: vxlan.calico: Gained carrier Sep 16 04:39:05.341896 containerd[1519]: 
time="2025-09-16T04:39:05.341305753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:05.343494 containerd[1519]: time="2025-09-16T04:39:05.343443354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 16 04:39:05.344223 containerd[1519]: time="2025-09-16T04:39:05.344190580Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:05.347439 containerd[1519]: time="2025-09-16T04:39:05.347400760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:05.349244 containerd[1519]: time="2025-09-16T04:39:05.349160207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.927807007s" Sep 16 04:39:05.349472 containerd[1519]: time="2025-09-16T04:39:05.349357123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 16 04:39:05.351013 containerd[1519]: time="2025-09-16T04:39:05.350984133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:39:05.361301 containerd[1519]: time="2025-09-16T04:39:05.361258021Z" level=info msg="CreateContainer within sandbox 
\"7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 04:39:05.375463 kubelet[2660]: E0916 04:39:05.375421 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:05.379825 containerd[1519]: time="2025-09-16T04:39:05.379617598Z" level=info msg="Container 60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:05.391951 containerd[1519]: time="2025-09-16T04:39:05.391908169Z" level=info msg="CreateContainer within sandbox \"7ff76c6ef57787ba3c86a51ebf64a17a09b82bfc87d7b8976aa545fd0ed5f4ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\"" Sep 16 04:39:05.392499 containerd[1519]: time="2025-09-16T04:39:05.392314481Z" level=info msg="StartContainer for \"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\"" Sep 16 04:39:05.394381 containerd[1519]: time="2025-09-16T04:39:05.393915771Z" level=info msg="connecting to shim 60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9" address="unix:///run/containerd/s/1801394bb9d866689dfd185b4860e1787b6d64a30aa83467e8ac029ac74aa8ad" protocol=ttrpc version=3 Sep 16 04:39:05.416695 systemd[1]: Started cri-containerd-60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9.scope - libcontainer container 60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9. 
Sep 16 04:39:05.468297 containerd[1519]: time="2025-09-16T04:39:05.468258943Z" level=info msg="StartContainer for \"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\" returns successfully" Sep 16 04:39:05.734364 systemd-networkd[1422]: cali519b94d9edd: Gained IPv6LL Sep 16 04:39:05.734644 systemd-networkd[1422]: calic4e2c85ca08: Gained IPv6LL Sep 16 04:39:06.054335 systemd-networkd[1422]: cali9d425911a5c: Gained IPv6LL Sep 16 04:39:06.397928 kubelet[2660]: I0916 04:39:06.397783 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86d8c7cc48-j9qwt" podStartSLOduration=22.467943479 podStartE2EDuration="24.397765568s" podCreationTimestamp="2025-09-16 04:38:42 +0000 UTC" firstStartedPulling="2025-09-16 04:39:03.421001007 +0000 UTC m=+41.335796693" lastFinishedPulling="2025-09-16 04:39:05.350823096 +0000 UTC m=+43.265618782" observedRunningTime="2025-09-16 04:39:06.39764761 +0000 UTC m=+44.312443336" watchObservedRunningTime="2025-09-16 04:39:06.397765568 +0000 UTC m=+44.312561214" Sep 16 04:39:06.435781 containerd[1519]: time="2025-09-16T04:39:06.435739036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\" id:\"7f99be9b5109d067dd03abb40b374bc5a3309640ee1b81e07daebb90422883cc\" pid:5261 exited_at:{seconds:1757997546 nanos:435119407}" Sep 16 04:39:06.797077 containerd[1519]: time="2025-09-16T04:39:06.797038015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:06.798265 containerd[1519]: time="2025-09-16T04:39:06.798238953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 16 04:39:06.799124 containerd[1519]: time="2025-09-16T04:39:06.799103377Z" level=info msg="ImageCreate event 
name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:06.801351 containerd[1519]: time="2025-09-16T04:39:06.801298697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:06.801842 containerd[1519]: time="2025-09-16T04:39:06.801808368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.450794876s" Sep 16 04:39:06.801842 containerd[1519]: time="2025-09-16T04:39:06.801841287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 16 04:39:06.802929 containerd[1519]: time="2025-09-16T04:39:06.802900388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:39:06.803891 containerd[1519]: time="2025-09-16T04:39:06.803862650Z" level=info msg="CreateContainer within sandbox \"9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:39:06.810452 containerd[1519]: time="2025-09-16T04:39:06.810408931Z" level=info msg="Container 84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:06.817928 containerd[1519]: time="2025-09-16T04:39:06.817863795Z" level=info msg="CreateContainer within sandbox \"9cb14c08bc3215c6647eda4073902739f4287e135da13fd5346c913610c9f238\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2\"" Sep 16 04:39:06.818671 containerd[1519]: time="2025-09-16T04:39:06.818578422Z" level=info msg="StartContainer for \"84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2\"" Sep 16 04:39:06.819793 containerd[1519]: time="2025-09-16T04:39:06.819755761Z" level=info msg="connecting to shim 84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2" address="unix:///run/containerd/s/80c703971b9ef22553adae02c8aa56f48afcae5e6a9b73ae3b787badbc0b611b" protocol=ttrpc version=3 Sep 16 04:39:06.843700 systemd[1]: Started cri-containerd-84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2.scope - libcontainer container 84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2. Sep 16 04:39:06.890431 containerd[1519]: time="2025-09-16T04:39:06.890391034Z" level=info msg="StartContainer for \"84652cb472503c3f9ea20576d0d20f42ad18a74ed8596f7fad2ed7d43e5a94f2\" returns successfully" Sep 16 04:39:06.950099 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Sep 16 04:39:07.041251 containerd[1519]: time="2025-09-16T04:39:07.040652194Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:07.041423 containerd[1519]: time="2025-09-16T04:39:07.041277223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 04:39:07.043097 containerd[1519]: time="2025-09-16T04:39:07.043068231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 
240.135084ms" Sep 16 04:39:07.043097 containerd[1519]: time="2025-09-16T04:39:07.043100391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 16 04:39:07.044199 containerd[1519]: time="2025-09-16T04:39:07.044170652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:39:07.045497 containerd[1519]: time="2025-09-16T04:39:07.045467109Z" level=info msg="CreateContainer within sandbox \"0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:39:07.055455 containerd[1519]: time="2025-09-16T04:39:07.055374132Z" level=info msg="Container 66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:07.065033 containerd[1519]: time="2025-09-16T04:39:07.064989801Z" level=info msg="CreateContainer within sandbox \"0fa9b5caaed7f1a935691274edac9a3a3c97ac44f9add9751b37c94c11272832\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20\"" Sep 16 04:39:07.065679 containerd[1519]: time="2025-09-16T04:39:07.065653590Z" level=info msg="StartContainer for \"66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20\"" Sep 16 04:39:07.066698 containerd[1519]: time="2025-09-16T04:39:07.066673811Z" level=info msg="connecting to shim 66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20" address="unix:///run/containerd/s/cf92b46c5fd89768908f185204c8051de7a1a6063c697d200656c605bd52d376" protocol=ttrpc version=3 Sep 16 04:39:07.088673 systemd[1]: Started cri-containerd-66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20.scope - libcontainer container 66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20. 
Sep 16 04:39:07.189973 containerd[1519]: time="2025-09-16T04:39:07.189909420Z" level=info msg="StartContainer for \"66cc7606a2c3360174cf1632b342b9db77aef7cc309a746533c18c2ea955bb20\" returns successfully" Sep 16 04:39:07.398059 kubelet[2660]: I0916 04:39:07.397360 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598d56f686-td8sm" podStartSLOduration=28.192010141 podStartE2EDuration="30.39734309s" podCreationTimestamp="2025-09-16 04:38:37 +0000 UTC" firstStartedPulling="2025-09-16 04:39:04.597316883 +0000 UTC m=+42.512112569" lastFinishedPulling="2025-09-16 04:39:06.802649832 +0000 UTC m=+44.717445518" observedRunningTime="2025-09-16 04:39:07.396269069 +0000 UTC m=+45.311064715" watchObservedRunningTime="2025-09-16 04:39:07.39734309 +0000 UTC m=+45.312138776" Sep 16 04:39:07.410820 kubelet[2660]: I0916 04:39:07.409262 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-598d56f686-bzhph" podStartSLOduration=28.065160461 podStartE2EDuration="30.409246318s" podCreationTimestamp="2025-09-16 04:38:37 +0000 UTC" firstStartedPulling="2025-09-16 04:39:04.69977408 +0000 UTC m=+42.614569766" lastFinishedPulling="2025-09-16 04:39:07.043859897 +0000 UTC m=+44.958655623" observedRunningTime="2025-09-16 04:39:07.408776927 +0000 UTC m=+45.323572613" watchObservedRunningTime="2025-09-16 04:39:07.409246318 +0000 UTC m=+45.324042004" Sep 16 04:39:08.111081 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:49016.service - OpenSSH per-connection server daemon (10.0.0.1:49016). Sep 16 04:39:08.206045 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 49016 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:08.207367 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:08.216247 systemd-logind[1503]: New session 9 of user core. 
Sep 16 04:39:08.225099 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:39:08.392808 kubelet[2660]: I0916 04:39:08.392679 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:39:08.393001 kubelet[2660]: I0916 04:39:08.392837 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:39:08.495090 sshd[5355]: Connection closed by 10.0.0.1 port 49016 Sep 16 04:39:08.495001 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:08.503173 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:49016.service: Deactivated successfully. Sep 16 04:39:08.505944 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:39:08.507294 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:39:08.509229 systemd-logind[1503]: Removed session 9. Sep 16 04:39:09.262456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036694778.mount: Deactivated successfully. 
Sep 16 04:39:09.655092 containerd[1519]: time="2025-09-16T04:39:09.655049767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:09.655657 containerd[1519]: time="2025-09-16T04:39:09.655625197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 16 04:39:09.656580 containerd[1519]: time="2025-09-16T04:39:09.656549061Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:09.658758 containerd[1519]: time="2025-09-16T04:39:09.658733104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:39:09.659759 containerd[1519]: time="2025-09-16T04:39:09.659438492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.615238081s" Sep 16 04:39:09.659759 containerd[1519]: time="2025-09-16T04:39:09.659473051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 16 04:39:09.674589 containerd[1519]: time="2025-09-16T04:39:09.674501636Z" level=info msg="CreateContainer within sandbox \"9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 04:39:09.682689 containerd[1519]: time="2025-09-16T04:39:09.681612515Z" 
level=info msg="Container bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:39:09.687605 containerd[1519]: time="2025-09-16T04:39:09.687571934Z" level=info msg="CreateContainer within sandbox \"9529f571e0f99c7159d178e25c8e9278bd8ac97015ebe54ccbae1a3084e77609\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\"" Sep 16 04:39:09.688060 containerd[1519]: time="2025-09-16T04:39:09.688038846Z" level=info msg="StartContainer for \"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\"" Sep 16 04:39:09.689341 containerd[1519]: time="2025-09-16T04:39:09.689298184Z" level=info msg="connecting to shim bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b" address="unix:///run/containerd/s/b34c397819f86e065388c31a05111e54cc7a59d17c15163bb6be54e534f7ea36" protocol=ttrpc version=3 Sep 16 04:39:09.712691 systemd[1]: Started cri-containerd-bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b.scope - libcontainer container bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b. 
Sep 16 04:39:09.748793 containerd[1519]: time="2025-09-16T04:39:09.748682174Z" level=info msg="StartContainer for \"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\" returns successfully" Sep 16 04:39:10.447000 kubelet[2660]: I0916 04:39:10.446902 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nxklv" podStartSLOduration=24.532127823 podStartE2EDuration="29.446882538s" podCreationTimestamp="2025-09-16 04:38:41 +0000 UTC" firstStartedPulling="2025-09-16 04:39:04.745575202 +0000 UTC m=+42.660370848" lastFinishedPulling="2025-09-16 04:39:09.660329877 +0000 UTC m=+47.575125563" observedRunningTime="2025-09-16 04:39:10.445461882 +0000 UTC m=+48.360257608" watchObservedRunningTime="2025-09-16 04:39:10.446882538 +0000 UTC m=+48.361678224" Sep 16 04:39:10.517177 containerd[1519]: time="2025-09-16T04:39:10.517136408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\" id:\"30fb4a6b043d163b57051021b77abc231deee0797e5fce748cc3dfb0c2a0fdc1\" pid:5435 exit_status:1 exited_at:{seconds:1757997550 nanos:516516059}" Sep 16 04:39:11.480881 containerd[1519]: time="2025-09-16T04:39:11.480817639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\" id:\"2704a4f5aa8d62e7befbd682dcf4298dc65092eccb570011f7a38c59f65aaaf1\" pid:5463 exit_status:1 exited_at:{seconds:1757997551 nanos:480457645}" Sep 16 04:39:13.505710 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:45992.service - OpenSSH per-connection server daemon (10.0.0.1:45992). 
Sep 16 04:39:13.560962 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 45992 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:13.562442 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:13.566588 systemd-logind[1503]: New session 10 of user core. Sep 16 04:39:13.580700 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:39:13.760207 sshd[5481]: Connection closed by 10.0.0.1 port 45992 Sep 16 04:39:13.760783 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:13.776107 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:45992.service: Deactivated successfully. Sep 16 04:39:13.779168 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:39:13.779964 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:39:13.782929 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:45998.service - OpenSSH per-connection server daemon (10.0.0.1:45998). Sep 16 04:39:13.783574 systemd-logind[1503]: Removed session 10. Sep 16 04:39:13.836137 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 45998 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:13.837465 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:13.841387 systemd-logind[1503]: New session 11 of user core. Sep 16 04:39:13.848663 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:39:14.065693 sshd[5499]: Connection closed by 10.0.0.1 port 45998 Sep 16 04:39:14.066570 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:14.081002 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:45998.service: Deactivated successfully. Sep 16 04:39:14.084620 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:39:14.086423 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit. 
Sep 16 04:39:14.090168 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:46006.service - OpenSSH per-connection server daemon (10.0.0.1:46006). Sep 16 04:39:14.093872 systemd-logind[1503]: Removed session 11. Sep 16 04:39:14.153954 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 46006 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:14.155247 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:14.161444 systemd-logind[1503]: New session 12 of user core. Sep 16 04:39:14.166693 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:39:14.350079 sshd[5513]: Connection closed by 10.0.0.1 port 46006 Sep 16 04:39:14.350306 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:14.355820 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:39:14.357309 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:46006.service: Deactivated successfully. Sep 16 04:39:14.362697 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:39:14.365928 systemd-logind[1503]: Removed session 12. Sep 16 04:39:19.369366 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016). Sep 16 04:39:19.425727 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:19.427086 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:19.433196 systemd-logind[1503]: New session 13 of user core. Sep 16 04:39:19.445744 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:39:19.640613 sshd[5543]: Connection closed by 10.0.0.1 port 46016 Sep 16 04:39:19.640384 sshd-session[5540]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:19.649950 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:46016.service: Deactivated successfully. Sep 16 04:39:19.651788 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:39:19.652495 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:39:19.654783 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:46020.service - OpenSSH per-connection server daemon (10.0.0.1:46020). Sep 16 04:39:19.656599 systemd-logind[1503]: Removed session 13. Sep 16 04:39:19.713641 sshd[5559]: Accepted publickey for core from 10.0.0.1 port 46020 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:19.715301 sshd-session[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:19.719593 systemd-logind[1503]: New session 14 of user core. Sep 16 04:39:19.730724 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:39:19.935622 sshd[5562]: Connection closed by 10.0.0.1 port 46020 Sep 16 04:39:19.936376 sshd-session[5559]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:19.947763 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:46020.service: Deactivated successfully. Sep 16 04:39:19.950041 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:39:19.950815 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:39:19.953383 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:38434.service - OpenSSH per-connection server daemon (10.0.0.1:38434). Sep 16 04:39:19.954053 systemd-logind[1503]: Removed session 14. 
Sep 16 04:39:20.007883 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 38434 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:20.009277 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:20.014403 systemd-logind[1503]: New session 15 of user core. Sep 16 04:39:20.024695 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:39:21.501134 sshd[5576]: Connection closed by 10.0.0.1 port 38434 Sep 16 04:39:21.501856 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:21.510268 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:38434.service: Deactivated successfully. Sep 16 04:39:21.511918 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:39:21.512107 systemd[1]: session-15.scope: Consumed 538ms CPU time, 69.8M memory peak. Sep 16 04:39:21.513614 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:39:21.520376 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Sep 16 04:39:21.526151 systemd-logind[1503]: Removed session 15. Sep 16 04:39:21.585677 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:21.587196 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:21.591436 systemd-logind[1503]: New session 16 of user core. Sep 16 04:39:21.605698 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:39:21.888917 sshd[5598]: Connection closed by 10.0.0.1 port 38450 Sep 16 04:39:21.889841 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:21.902135 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:38450.service: Deactivated successfully. Sep 16 04:39:21.905265 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 16 04:39:21.906068 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:39:21.908788 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:38460.service - OpenSSH per-connection server daemon (10.0.0.1:38460). Sep 16 04:39:21.910140 systemd-logind[1503]: Removed session 16. Sep 16 04:39:21.978714 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 38460 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:21.980060 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:21.984101 systemd-logind[1503]: New session 17 of user core. Sep 16 04:39:21.991689 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:39:22.134514 sshd[5613]: Connection closed by 10.0.0.1 port 38460 Sep 16 04:39:22.135046 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:22.138720 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:38460.service: Deactivated successfully. Sep 16 04:39:22.140464 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:39:22.141456 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:39:22.142408 systemd-logind[1503]: Removed session 17. Sep 16 04:39:25.063231 containerd[1519]: time="2025-09-16T04:39:25.063178997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d59c144b1d90e6e64b1834f83edf6b9f37342013d000f0088df427b58a91e65a\" id:\"05168b03231f89805ecacd0de4fe43c7a20b0a3885e2eb235cc75fa376d4e4fa\" pid:5642 exited_at:{seconds:1757997565 nanos:62739522}" Sep 16 04:39:27.148111 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476). 
Sep 16 04:39:27.208867 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:27.210222 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:27.214301 systemd-logind[1503]: New session 18 of user core. Sep 16 04:39:27.223697 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 16 04:39:27.397480 sshd[5670]: Connection closed by 10.0.0.1 port 38476 Sep 16 04:39:27.398048 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:27.401627 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:38476.service: Deactivated successfully. Sep 16 04:39:27.404332 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:39:27.405288 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:39:27.406952 systemd-logind[1503]: Removed session 18. Sep 16 04:39:29.303966 kubelet[2660]: I0916 04:39:29.303690 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:39:29.756786 containerd[1519]: time="2025-09-16T04:39:29.756684878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf40c9c4fdcc21620be86a90ee039191cdc0801b3f51e74baaa115ef3c48fc9b\" id:\"687c60a2f0b2f9b8c9076401c7f0f6bce14c332178f2ef115b699e392a8e8eb3\" pid:5699 exited_at:{seconds:1757997569 nanos:756367200}" Sep 16 04:39:30.511595 kubelet[2660]: I0916 04:39:30.511265 2660 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:39:32.020751 containerd[1519]: time="2025-09-16T04:39:32.020708880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\" id:\"08372c8a68363f5ddac5ae7fb4557fa5396b4fff6392d5654917fa4278db1c9f\" pid:5727 exited_at:{seconds:1757997572 nanos:19472403}" Sep 16 04:39:32.410228 systemd[1]: Started 
sshd@18-10.0.0.119:22-10.0.0.1:37472.service - OpenSSH per-connection server daemon (10.0.0.1:37472). Sep 16 04:39:32.458315 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 37472 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:32.458232 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:32.467455 systemd-logind[1503]: New session 19 of user core. Sep 16 04:39:32.474797 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:39:32.621550 sshd[5741]: Connection closed by 10.0.0.1 port 37472 Sep 16 04:39:32.621334 sshd-session[5738]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:32.625129 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:37472.service: Deactivated successfully. Sep 16 04:39:32.626903 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:39:32.627785 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:39:32.630873 systemd-logind[1503]: Removed session 19. Sep 16 04:39:35.795984 containerd[1519]: time="2025-09-16T04:39:35.795945182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60a204b70d0a2adfb2928096bbf8f97697b833842caf34832ed390bcd7eafee9\" id:\"3a371d3f5bcfd2c6959bdfd5cc53602862ba684982544906c5b7338105b7ab3a\" pid:5768 exited_at:{seconds:1757997575 nanos:795695336}" Sep 16 04:39:37.635624 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:37478.service - OpenSSH per-connection server daemon (10.0.0.1:37478). Sep 16 04:39:37.718094 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 37478 ssh2: RSA SHA256:nFpue0m8aDrwmDvAfxWMvmhmj8J52HeKY0rjGWA1wZw Sep 16 04:39:37.719762 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:39:37.723585 systemd-logind[1503]: New session 20 of user core. Sep 16 04:39:37.741696 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 16 04:39:37.927891 sshd[5782]: Connection closed by 10.0.0.1 port 37478 Sep 16 04:39:37.928173 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Sep 16 04:39:37.932557 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:39:37.932782 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:37478.service: Deactivated successfully. Sep 16 04:39:37.936104 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:39:37.939452 systemd-logind[1503]: Removed session 20. Sep 16 04:39:38.168743 kubelet[2660]: E0916 04:39:38.168556 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:39:38.168743 kubelet[2660]: E0916 04:39:38.168677 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"