Sep 10 23:21:58.769976 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:21:58.769998 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:08:24 -00 2025
Sep 10 23:21:58.770008 kernel: KASLR enabled
Sep 10 23:21:58.770014 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:21:58.770019 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:21:58.770025 kernel: random: crng init done
Sep 10 23:21:58.770032 kernel: secureboot: Secure boot disabled
Sep 10 23:21:58.770037 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:21:58.770043 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:21:58.770050 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:21:58.770056 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770061 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770067 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770073 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770080 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770087 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770093 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770099 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770105 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:21:58.770110 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:21:58.770116 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:21:58.770122 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:21:58.770128 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:21:58.770134 kernel: Zone ranges:
Sep 10 23:21:58.770140 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:21:58.770147 kernel: DMA32 empty
Sep 10 23:21:58.770153 kernel: Normal empty
Sep 10 23:21:58.770159 kernel: Device empty
Sep 10 23:21:58.770165 kernel: Movable zone start for each node
Sep 10 23:21:58.770170 kernel: Early memory node ranges
Sep 10 23:21:58.770176 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:21:58.770182 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:21:58.770188 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:21:58.770194 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:21:58.770200 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:21:58.770206 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:21:58.770212 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:21:58.770219 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:21:58.770225 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:21:58.770231 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:21:58.770240 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:21:58.770246 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:21:58.770252 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:21:58.770260 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:21:58.770266 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:21:58.770273 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:21:58.770279 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:21:58.770285 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:21:58.770292 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:21:58.770298 kernel: psci: Trusted OS migration not required
Sep 10 23:21:58.770304 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:21:58.770311 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:21:58.770317 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:21:58.770325 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:21:58.770331 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:21:58.770338 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:21:58.770344 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:21:58.770350 kernel: CPU features: detected: Spectre-v4
Sep 10 23:21:58.770357 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:21:58.770363 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:21:58.770369 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:21:58.770376 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:21:58.770382 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:21:58.770389 kernel: alternatives: applying boot alternatives
Sep 10 23:21:58.770396 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:21:58.770404 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:21:58.770411 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:21:58.770417 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:21:58.770423 kernel: Fallback order for Node 0: 0
Sep 10 23:21:58.770430 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:21:58.770436 kernel: Policy zone: DMA
Sep 10 23:21:58.770442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:21:58.770449 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:21:58.770455 kernel: software IO TLB: area num 4.
Sep 10 23:21:58.770461 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:21:58.770468 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:21:58.770476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:21:58.770482 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:21:58.770537 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:21:58.770546 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:21:58.770552 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:21:58.770559 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:21:58.770566 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:21:58.770572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:21:58.770579 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:21:58.770586 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:21:58.770592 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:21:58.770601 kernel: GICv3: 256 SPIs implemented
Sep 10 23:21:58.770607 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:21:58.770614 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:21:58.770620 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:21:58.770627 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:21:58.770633 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:21:58.770640 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:21:58.770654 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:21:58.770661 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:21:58.770668 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:21:58.770675 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:21:58.770681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:21:58.770690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:21:58.770697 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:21:58.770706 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:21:58.770713 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:21:58.770720 kernel: arm-pv: using stolen time PV
Sep 10 23:21:58.770727 kernel: Console: colour dummy device 80x25
Sep 10 23:21:58.770734 kernel: ACPI: Core revision 20240827
Sep 10 23:21:58.770741 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:21:58.770748 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:21:58.770754 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:21:58.770764 kernel: landlock: Up and running.
Sep 10 23:21:58.770771 kernel: SELinux: Initializing.
Sep 10 23:21:58.770778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:21:58.770784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:21:58.770795 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:21:58.770804 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:21:58.770810 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:21:58.770817 kernel: Remapping and enabling EFI services.
Sep 10 23:21:58.770824 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:21:58.770837 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:21:58.770844 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:21:58.770851 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:21:58.770861 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:21:58.770868 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:21:58.770875 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:21:58.770883 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:21:58.770892 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:21:58.770902 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:21:58.770909 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:21:58.770916 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:21:58.770923 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:21:58.770930 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:21:58.770938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:21:58.770945 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:21:58.770952 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:21:58.770959 kernel: SMP: Total of 4 processors activated.
Sep 10 23:21:58.770970 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:21:58.770981 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:21:58.770988 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:21:58.770995 kernel: CPU features: detected: Common not Private translations
Sep 10 23:21:58.771009 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:21:58.771018 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:21:58.771025 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:21:58.771032 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:21:58.771053 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:21:58.771061 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:21:58.771068 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:21:58.771075 kernel: alternatives: applying system-wide alternatives
Sep 10 23:21:58.771082 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:21:58.771090 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9064K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 10 23:21:58.771097 kernel: devtmpfs: initialized
Sep 10 23:21:58.771104 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:21:58.771111 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:21:58.771118 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:21:58.771126 kernel: 0 pages in range for non-PLT usage
Sep 10 23:21:58.771145 kernel: 508576 pages in range for PLT usage
Sep 10 23:21:58.771152 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:21:58.771158 kernel: SMBIOS 3.0.0 present.
Sep 10 23:21:58.771166 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:21:58.771173 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:21:58.771180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:21:58.771187 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:21:58.771194 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:21:58.771203 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:21:58.771209 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:21:58.771216 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 10 23:21:58.771223 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:21:58.771230 kernel: cpuidle: using governor menu
Sep 10 23:21:58.771237 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:21:58.771244 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:21:58.771250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:21:58.771257 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:21:58.771265 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:21:58.771272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:21:58.771279 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:21:58.771286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:21:58.771292 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:21:58.771299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:21:58.771306 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:21:58.771313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:21:58.771319 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:21:58.771326 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:21:58.771334 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:21:58.771341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:21:58.771348 kernel: ACPI: Interpreter enabled
Sep 10 23:21:58.771355 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:21:58.771362 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:21:58.771368 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:21:58.771375 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:21:58.771382 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:21:58.771389 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:21:58.771397 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:21:58.771404 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:21:58.771411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:21:58.771553 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:21:58.771618 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:21:58.771690 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:21:58.771749 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:21:58.771807 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:21:58.771816 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:21:58.771824 kernel: PCI host bridge to bus 0000:00
Sep 10 23:21:58.771890 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:21:58.771943 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:21:58.771993 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:21:58.772056 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:21:58.772134 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:21:58.772203 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:21:58.772264 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:21:58.772321 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:21:58.772381 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:21:58.772438 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:21:58.772509 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:21:58.772599 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:21:58.772678 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:21:58.772737 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:21:58.772792 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:21:58.772802 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:21:58.772809 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:21:58.772817 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:21:58.772827 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:21:58.772834 kernel: iommu: Default domain type: Translated
Sep 10 23:21:58.772841 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:21:58.772848 kernel: efivars: Registered efivars operations
Sep 10 23:21:58.772854 kernel: vgaarb: loaded
Sep 10 23:21:58.772861 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:21:58.772869 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:21:58.772876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:21:58.772883 kernel: pnp: PnP ACPI init
Sep 10 23:21:58.772958 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:21:58.772968 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:21:58.772975 kernel: NET: Registered PF_INET protocol family
Sep 10 23:21:58.772982 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:21:58.772989 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:21:58.772996 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:21:58.773003 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:21:58.773010 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:21:58.773019 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:21:58.773026 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:21:58.773033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:21:58.773040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:21:58.773047 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:21:58.773054 kernel: kvm [1]: HYP mode not available
Sep 10 23:21:58.773061 kernel: Initialise system trusted keyrings
Sep 10 23:21:58.773068 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:21:58.773074 kernel: Key type asymmetric registered
Sep 10 23:21:58.773081 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:21:58.773089 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:21:58.773096 kernel: io scheduler mq-deadline registered
Sep 10 23:21:58.773103 kernel: io scheduler kyber registered
Sep 10 23:21:58.773110 kernel: io scheduler bfq registered
Sep 10 23:21:58.773117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:21:58.773124 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:21:58.773131 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:21:58.773192 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:21:58.773201 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:21:58.773210 kernel: thunder_xcv, ver 1.0
Sep 10 23:21:58.773217 kernel: thunder_bgx, ver 1.0
Sep 10 23:21:58.773224 kernel: nicpf, ver 1.0
Sep 10 23:21:58.773230 kernel: nicvf, ver 1.0
Sep 10 23:21:58.773298 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:21:58.773353 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:21:58 UTC (1757546518)
Sep 10 23:21:58.773362 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:21:58.773369 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:21:58.773378 kernel: watchdog: NMI not fully supported
Sep 10 23:21:58.773385 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:21:58.773392 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:21:58.773399 kernel: Segment Routing with IPv6
Sep 10 23:21:58.773406 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:21:58.773413 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:21:58.773420 kernel: Key type dns_resolver registered
Sep 10 23:21:58.773427 kernel: registered taskstats version 1
Sep 10 23:21:58.773434 kernel: Loading compiled-in X.509 certificates
Sep 10 23:21:58.773443 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 614348c8450ce34f552a2f872e2a442c01d91c4b'
Sep 10 23:21:58.773449 kernel: Demotion targets for Node 0: null
Sep 10 23:21:58.773456 kernel: Key type .fscrypt registered
Sep 10 23:21:58.773463 kernel: Key type fscrypt-provisioning registered
Sep 10 23:21:58.773470 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:21:58.773477 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:21:58.773484 kernel: ima: No architecture policies found
Sep 10 23:21:58.773502 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:21:58.773526 kernel: clk: Disabling unused clocks
Sep 10 23:21:58.773534 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:21:58.773541 kernel: Warning: unable to open an initial console.
Sep 10 23:21:58.773548 kernel: Freeing unused kernel memory: 38912K
Sep 10 23:21:58.773555 kernel: Run /init as init process
Sep 10 23:21:58.773561 kernel: with arguments:
Sep 10 23:21:58.773568 kernel: /init
Sep 10 23:21:58.773575 kernel: with environment:
Sep 10 23:21:58.773582 kernel: HOME=/
Sep 10 23:21:58.773589 kernel: TERM=linux
Sep 10 23:21:58.773597 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:21:58.773605 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:21:58.773615 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:21:58.773623 systemd[1]: Detected virtualization kvm.
Sep 10 23:21:58.773630 systemd[1]: Detected architecture arm64.
Sep 10 23:21:58.773637 systemd[1]: Running in initrd.
Sep 10 23:21:58.773652 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:21:58.773663 systemd[1]: Hostname set to .
Sep 10 23:21:58.773670 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:21:58.773677 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:21:58.773685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:21:58.773692 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:21:58.773700 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:21:58.773707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:21:58.773715 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:21:58.773725 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:21:58.773733 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:21:58.773741 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:21:58.773748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:21:58.773755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:21:58.773763 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:21:58.773770 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:21:58.773779 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:21:58.773787 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:21:58.773794 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:21:58.773802 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:21:58.773809 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:21:58.773817 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:21:58.773825 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:21:58.773832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:21:58.773841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:21:58.773848 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:21:58.773856 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:21:58.773863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:21:58.773870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:21:58.773878 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:21:58.773888 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:21:58.773896 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:21:58.773903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:21:58.773912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:21:58.773919 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:21:58.773927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:21:58.773935 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:21:58.773943 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:21:58.773968 systemd-journald[243]: Collecting audit messages is disabled.
Sep 10 23:21:58.773986 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:21:58.773994 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:21:58.774004 systemd-journald[243]: Journal started
Sep 10 23:21:58.774022 systemd-journald[243]: Runtime Journal (/run/log/journal/5416777abe3847b6ad02aeb90c68685e) is 6M, max 48.5M, 42.4M free.
Sep 10 23:21:58.760190 systemd-modules-load[245]: Inserted module 'overlay'
Sep 10 23:21:58.776791 kernel: Bridge firewalling registered
Sep 10 23:21:58.776809 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:21:58.775486 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 10 23:21:58.777749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:21:58.780340 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:21:58.781976 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:21:58.783853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:21:58.799626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:21:58.803424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:21:58.806564 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:21:58.808789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:21:58.810802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:21:58.815422 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:21:58.816542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:21:58.822672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:21:58.825824 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:21:58.839233 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:21:58.852786 systemd-resolved[283]: Positive Trust Anchors:
Sep 10 23:21:58.852805 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:21:58.852836 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:21:58.857662 systemd-resolved[283]: Defaulting to hostname 'linux'.
Sep 10 23:21:58.858706 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:21:58.861143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:21:58.914524 kernel: SCSI subsystem initialized
Sep 10 23:21:58.918512 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:21:58.926522 kernel: iscsi: registered transport (tcp)
Sep 10 23:21:58.940542 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:21:58.940610 kernel: QLogic iSCSI HBA Driver
Sep 10 23:21:58.957997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:21:58.972074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:21:58.973470 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:21:59.020162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:21:59.022425 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:21:59.081539 kernel: raid6: neonx8 gen() 15717 MB/s Sep 10 23:21:59.098533 kernel: raid6: neonx4 gen() 15733 MB/s Sep 10 23:21:59.115522 kernel: raid6: neonx2 gen() 13220 MB/s Sep 10 23:21:59.132525 kernel: raid6: neonx1 gen() 10417 MB/s Sep 10 23:21:59.149537 kernel: raid6: int64x8 gen() 6865 MB/s Sep 10 23:21:59.166528 kernel: raid6: int64x4 gen() 7328 MB/s Sep 10 23:21:59.183525 kernel: raid6: int64x2 gen() 6087 MB/s Sep 10 23:21:59.200522 kernel: raid6: int64x1 gen() 5044 MB/s Sep 10 23:21:59.200554 kernel: raid6: using algorithm neonx4 gen() 15733 MB/s Sep 10 23:21:59.217534 kernel: raid6: .... xor() 12314 MB/s, rmw enabled Sep 10 23:21:59.217567 kernel: raid6: using neon recovery algorithm Sep 10 23:21:59.222629 kernel: xor: measuring software checksum speed Sep 10 23:21:59.222661 kernel: 8regs : 21573 MB/sec Sep 10 23:21:59.223765 kernel: 32regs : 20988 MB/sec Sep 10 23:21:59.223779 kernel: arm64_neon : 28032 MB/sec Sep 10 23:21:59.223788 kernel: xor: using function: arm64_neon (28032 MB/sec) Sep 10 23:21:59.277539 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 23:21:59.283408 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 23:21:59.289805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 23:21:59.319418 systemd-udevd[500]: Using default interface naming scheme 'v255'. Sep 10 23:21:59.325706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 23:21:59.329728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 23:21:59.360242 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Sep 10 23:21:59.385519 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:21:59.387531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 23:21:59.447215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 10 23:21:59.449535 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 10 23:21:59.524215 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 10 23:21:59.524387 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 23:21:59.527584 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 23:21:59.527616 kernel: GPT:9289727 != 19775487 Sep 10 23:21:59.527627 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 23:21:59.529001 kernel: GPT:9289727 != 19775487 Sep 10 23:21:59.529030 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 23:21:59.530523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:21:59.541605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 23:21:59.542765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:21:59.544981 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:21:59.549108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:21:59.561044 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 10 23:21:59.570414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 23:21:59.574532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 23:21:59.580867 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:21:59.594413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 23:21:59.600375 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 23:21:59.601545 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 10 23:21:59.608599 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:21:59.609535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:21:59.611465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 23:21:59.614206 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 23:21:59.615887 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 23:21:59.636568 disk-uuid[592]: Primary Header is updated. Sep 10 23:21:59.636568 disk-uuid[592]: Secondary Entries is updated. Sep 10 23:21:59.636568 disk-uuid[592]: Secondary Header is updated. Sep 10 23:21:59.639298 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:21:59.643527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:22:00.652529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 23:22:00.653412 disk-uuid[597]: The operation has completed successfully. Sep 10 23:22:00.675295 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 23:22:00.675411 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 23:22:00.703634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 23:22:00.715637 sh[612]: Success Sep 10 23:22:00.728902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 23:22:00.728949 kernel: device-mapper: uevent: version 1.0.3 Sep 10 23:22:00.728968 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 23:22:00.736521 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 10 23:22:00.763057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 23:22:00.765592 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 10 23:22:00.778329 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 23:22:00.784507 kernel: BTRFS: device fsid 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (625) Sep 10 23:22:00.786591 kernel: BTRFS info (device dm-0): first mount of filesystem 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e Sep 10 23:22:00.786613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:22:00.790505 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 23:22:00.790544 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 23:22:00.791240 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 23:22:00.792334 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 23:22:00.793460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 23:22:00.794184 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 23:22:00.795555 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 23:22:00.817957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656) Sep 10 23:22:00.818003 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4 Sep 10 23:22:00.818013 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:22:00.821771 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:22:00.821808 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:22:00.825514 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4 Sep 10 23:22:00.826720 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 10 23:22:00.828354 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 23:22:00.896552 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 23:22:00.899134 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 23:22:00.929002 ignition[705]: Ignition 2.21.0 Sep 10 23:22:00.929019 ignition[705]: Stage: fetch-offline Sep 10 23:22:00.929055 ignition[705]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:00.929062 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:00.929229 ignition[705]: parsed url from cmdline: "" Sep 10 23:22:00.929232 ignition[705]: no config URL provided Sep 10 23:22:00.929237 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 23:22:00.929243 ignition[705]: no config at "/usr/lib/ignition/user.ign" Sep 10 23:22:00.929269 ignition[705]: op(1): [started] loading QEMU firmware config module Sep 10 23:22:00.934329 systemd-networkd[804]: lo: Link UP Sep 10 23:22:00.929273 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 23:22:00.934333 systemd-networkd[804]: lo: Gained carrier Sep 10 23:22:00.934510 ignition[705]: op(1): [finished] loading QEMU firmware config module Sep 10 23:22:00.935045 systemd-networkd[804]: Enumeration completed Sep 10 23:22:00.935407 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:22:00.935410 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:22:00.935839 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 10 23:22:00.935925 systemd-networkd[804]: eth0: Link UP Sep 10 23:22:00.936231 systemd-networkd[804]: eth0: Gained carrier Sep 10 23:22:00.936240 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:22:00.939007 systemd[1]: Reached target network.target - Network. Sep 10 23:22:00.955528 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:22:00.986686 ignition[705]: parsing config with SHA512: 13277b96526690c447400cd5d96563f018c730ca3e3a77df529311a1eefb678b0afa71bbabb5dc812a1f6d28998961a17bb2da3eaa62ae638a93ef0f471f9fe4 Sep 10 23:22:00.990733 unknown[705]: fetched base config from "system" Sep 10 23:22:00.990748 unknown[705]: fetched user config from "qemu" Sep 10 23:22:00.991116 ignition[705]: fetch-offline: fetch-offline passed Sep 10 23:22:00.991178 ignition[705]: Ignition finished successfully Sep 10 23:22:00.993240 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:22:00.994670 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 23:22:00.995477 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 10 23:22:01.029458 ignition[812]: Ignition 2.21.0 Sep 10 23:22:01.029478 ignition[812]: Stage: kargs Sep 10 23:22:01.029626 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:01.029635 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:01.032879 ignition[812]: kargs: kargs passed Sep 10 23:22:01.032931 ignition[812]: Ignition finished successfully Sep 10 23:22:01.036523 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 23:22:01.039370 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 10 23:22:01.065729 ignition[820]: Ignition 2.21.0 Sep 10 23:22:01.065747 ignition[820]: Stage: disks Sep 10 23:22:01.065897 ignition[820]: no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:01.065907 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:01.068902 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 23:22:01.067242 ignition[820]: disks: disks passed Sep 10 23:22:01.070627 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 23:22:01.067308 ignition[820]: Ignition finished successfully Sep 10 23:22:01.071911 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 23:22:01.073235 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 23:22:01.074853 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:22:01.076066 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:22:01.078373 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 23:22:01.100250 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 10 23:22:01.104561 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 23:22:01.106972 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 23:22:01.162505 kernel: EXT4-fs (vda9): mounted filesystem e1f6153c-c458-4b1b-a85a-9d30297a863a r/w with ordered data mode. Quota mode: none. Sep 10 23:22:01.162951 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 23:22:01.164054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 23:22:01.166053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:22:01.167540 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 23:22:01.168322 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 10 23:22:01.168361 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 23:22:01.168389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:22:01.178003 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 23:22:01.180357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 23:22:01.184936 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Sep 10 23:22:01.184957 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4 Sep 10 23:22:01.184967 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:22:01.184983 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:22:01.186513 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:22:01.187533 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 23:22:01.215561 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 23:22:01.218572 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 10 23:22:01.221598 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 23:22:01.224444 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 23:22:01.288480 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 23:22:01.290254 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 23:22:01.291769 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 23:22:01.316059 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4 Sep 10 23:22:01.331648 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 10 23:22:01.343676 ignition[952]: INFO : Ignition 2.21.0 Sep 10 23:22:01.343676 ignition[952]: INFO : Stage: mount Sep 10 23:22:01.345936 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:01.345936 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:01.347708 ignition[952]: INFO : mount: mount passed Sep 10 23:22:01.347708 ignition[952]: INFO : Ignition finished successfully Sep 10 23:22:01.349122 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 23:22:01.350889 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 23:22:01.784480 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 23:22:01.785916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 23:22:01.806503 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 10 23:22:01.806547 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4 Sep 10 23:22:01.808003 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 10 23:22:01.810509 kernel: BTRFS info (device vda6): turning on async discard Sep 10 23:22:01.810542 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 23:22:01.811456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 10 23:22:01.847606 ignition[982]: INFO : Ignition 2.21.0 Sep 10 23:22:01.847606 ignition[982]: INFO : Stage: files Sep 10 23:22:01.849463 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:01.849463 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:01.851456 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 10 23:22:01.852678 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 23:22:01.852678 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 23:22:01.855404 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 23:22:01.856504 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 23:22:01.856504 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 23:22:01.855874 unknown[982]: wrote ssh authorized keys file for user: core Sep 10 23:22:01.859457 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 23:22:01.859457 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 10 23:22:02.074608 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 23:22:02.480816 systemd-networkd[804]: eth0: Gained IPv6LL Sep 10 23:22:02.962333 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 10 23:22:02.962333 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 23:22:02.966334 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:22:02.982301 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:22:02.982301 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:22:02.982301 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 10 23:22:03.640887 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 23:22:04.536755 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 10 23:22:04.536755 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 23:22:04.540561 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:22:04.542856 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 23:22:04.542856 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 23:22:04.542856 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 23:22:04.547462 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 23:22:04.547462 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 23:22:04.547462 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 23:22:04.547462 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 23:22:04.562854 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 23:22:04.566318 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 23:22:04.568652 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 23:22:04.568652 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 10 23:22:04.568652 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 23:22:04.568652 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:22:04.568652 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 23:22:04.568652 ignition[982]: INFO : files: files passed Sep 10 23:22:04.568652 ignition[982]: INFO : Ignition finished successfully Sep 10 23:22:04.571144 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 23:22:04.575352 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 23:22:04.577011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 23:22:04.591176 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 23:22:04.592211 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 23:22:04.594215 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 23:22:04.595453 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:22:04.595453 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:22:04.597871 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 23:22:04.597361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:22:04.599167 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:22:04.600850 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 23:22:04.641055 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 23:22:04.641183 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 23:22:04.643082 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 23:22:04.644581 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 23:22:04.646141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 23:22:04.646969 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 23:22:04.673753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:22:04.675971 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 23:22:04.698159 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:22:04.699163 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 23:22:04.700737 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 23:22:04.702117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 23:22:04.702235 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 23:22:04.704210 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 23:22:04.705869 systemd[1]: Stopped target basic.target - Basic System. Sep 10 23:22:04.707178 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 23:22:04.708439 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 23:22:04.709997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 23:22:04.711436 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Sep 10 23:22:04.712973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 23:22:04.714387 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 23:22:04.715899 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 23:22:04.717310 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 23:22:04.718767 systemd[1]: Stopped target swap.target - Swaps. Sep 10 23:22:04.720076 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 23:22:04.720196 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 23:22:04.722187 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 23:22:04.723835 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 23:22:04.725603 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 23:22:04.727089 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 23:22:04.728277 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 23:22:04.728398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 23:22:04.730665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 23:22:04.730787 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 23:22:04.732255 systemd[1]: Stopped target paths.target - Path Units. Sep 10 23:22:04.733432 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 23:22:04.738597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 23:22:04.740567 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 23:22:04.741285 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 23:22:04.742479 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 10 23:22:04.742575 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 23:22:04.743805 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 23:22:04.743875 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 23:22:04.745077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 23:22:04.745183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 23:22:04.746557 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 23:22:04.746661 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 23:22:04.752281 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 23:22:04.753295 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 23:22:04.753429 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 23:22:04.776220 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 23:22:04.776969 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 23:22:04.777105 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 23:22:04.778681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 23:22:04.778776 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 23:22:04.785471 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 23:22:04.785587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 10 23:22:04.790469 ignition[1038]: INFO : Ignition 2.21.0 Sep 10 23:22:04.790469 ignition[1038]: INFO : Stage: umount Sep 10 23:22:04.791863 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 23:22:04.791863 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 23:22:04.791863 ignition[1038]: INFO : umount: umount passed Sep 10 23:22:04.791863 ignition[1038]: INFO : Ignition finished successfully Sep 10 23:22:04.791798 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 23:22:04.793386 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 23:22:04.793472 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 23:22:04.794852 systemd[1]: Stopped target network.target - Network. Sep 10 23:22:04.797011 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 23:22:04.797076 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 23:22:04.798532 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 23:22:04.798573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 23:22:04.799866 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 23:22:04.799913 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 23:22:04.802801 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 23:22:04.802841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 23:22:04.804424 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 23:22:04.805919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 23:22:04.815987 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 23:22:04.816120 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 23:22:04.820342 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 10 23:22:04.820620 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:22:04.820670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:22:04.823929 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:22:04.824120 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:22:04.824309 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:22:04.827431 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:22:04.827926 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:22:04.829435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:22:04.829467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:22:04.831736 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:22:04.833095 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:22:04.833147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:22:04.834867 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:22:04.834911 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:22:04.837323 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:22:04.837364 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:22:04.838968 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:22:04.842067 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:22:04.854334 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:22:04.857648 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:22:04.858994 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:22:04.859029 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:22:04.860369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:22:04.860395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:22:04.862103 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:22:04.862151 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:22:04.864720 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:22:04.864772 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:22:04.867947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:22:04.868000 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:22:04.870637 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:22:04.872092 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:22:04.872144 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:22:04.874823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:22:04.874864 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:22:04.877418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:22:04.877460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:22:04.880431 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:22:04.882294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:22:04.883316 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:22:04.883398 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:22:04.884981 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:22:04.885032 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:22:04.886551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:22:04.886622 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:22:04.888255 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:22:04.889905 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:22:04.902821 systemd[1]: Switching root.
Sep 10 23:22:04.942682 systemd-journald[243]: Journal stopped
Sep 10 23:22:05.668220 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:22:05.668279 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:22:05.668295 kernel: SELinux: policy capability open_perms=1
Sep 10 23:22:05.668308 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:22:05.668317 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:22:05.668326 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:22:05.668336 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:22:05.668345 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:22:05.668355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:22:05.668364 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:22:05.668374 systemd[1]: Successfully loaded SELinux policy in 56.488ms.
Sep 10 23:22:05.668390 kernel: audit: type=1403 audit(1757546525.108:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:22:05.668400 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.129ms.
Sep 10 23:22:05.668411 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:22:05.668422 systemd[1]: Detected virtualization kvm.
Sep 10 23:22:05.668432 systemd[1]: Detected architecture arm64.
Sep 10 23:22:05.668442 systemd[1]: Detected first boot.
Sep 10 23:22:05.668453 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:22:05.668463 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:22:05.668473 zram_generator::config[1083]: No configuration found.
Sep 10 23:22:05.668486 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:22:05.668514 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:22:05.668525 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:22:05.668535 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:22:05.668546 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:22:05.668558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:22:05.668568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:22:05.668578 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:22:05.668588 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:22:05.668598 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:22:05.668608 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:22:05.668618 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:22:05.668636 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:22:05.668648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:22:05.668661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:22:05.668671 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:22:05.668681 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:22:05.668692 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:22:05.668703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:22:05.668713 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:22:05.668723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:22:05.668733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:22:05.668744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:22:05.668755 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:22:05.668765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:22:05.668775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:22:05.668786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:22:05.668796 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:22:05.668806 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:22:05.668816 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:22:05.668827 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:22:05.668838 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:22:05.668848 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:22:05.668859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:22:05.668869 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:22:05.668879 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:22:05.668890 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:22:05.668904 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:22:05.668915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:22:05.668925 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:22:05.668937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:22:05.668947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:22:05.668956 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:22:05.668967 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:22:05.668977 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:22:05.668987 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:22:05.668997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:05.669007 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:22:05.669019 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:22:05.669029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:05.669039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:22:05.669049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:05.669059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:22:05.669069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:05.669079 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:22:05.669093 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:22:05.669103 kernel: ACPI: bus type drm_connector registered
Sep 10 23:22:05.669115 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:22:05.669125 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:22:05.669134 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:22:05.669145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:05.669155 kernel: loop: module loaded
Sep 10 23:22:05.669165 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:22:05.669175 kernel: fuse: init (API version 7.41)
Sep 10 23:22:05.669185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:22:05.669196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:22:05.669208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:22:05.669219 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:22:05.669230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:22:05.669241 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:22:05.669270 systemd-journald[1165]: Collecting audit messages is disabled.
Sep 10 23:22:05.669293 systemd[1]: Stopped verity-setup.service.
Sep 10 23:22:05.669305 systemd-journald[1165]: Journal started
Sep 10 23:22:05.669325 systemd-journald[1165]: Runtime Journal (/run/log/journal/5416777abe3847b6ad02aeb90c68685e) is 6M, max 48.5M, 42.4M free.
Sep 10 23:22:05.455167 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:22:05.479380 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:22:05.479767 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:22:05.674522 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:22:05.674938 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:22:05.675852 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:22:05.676769 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:22:05.677583 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:22:05.678581 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:22:05.679518 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:22:05.680533 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:22:05.681719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:22:05.682899 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:22:05.683073 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:22:05.684289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:05.684465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:05.685564 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:22:05.685738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:22:05.686817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:05.686968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:05.688241 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:22:05.688409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:22:05.689635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:05.689807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:05.690917 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:22:05.692078 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:22:05.693331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:22:05.694679 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:22:05.706545 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:22:05.708681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:22:05.710513 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:22:05.711409 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:22:05.711438 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:22:05.713149 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:22:05.718415 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:22:05.720842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:05.723652 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:22:05.725415 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:22:05.726616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:22:05.727467 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:22:05.729146 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:22:05.733293 systemd-journald[1165]: Time spent on flushing to /var/log/journal/5416777abe3847b6ad02aeb90c68685e is 12.743ms for 882 entries.
Sep 10 23:22:05.733293 systemd-journald[1165]: System Journal (/var/log/journal/5416777abe3847b6ad02aeb90c68685e) is 8M, max 195.6M, 187.6M free.
Sep 10 23:22:05.759846 systemd-journald[1165]: Received client request to flush runtime journal.
Sep 10 23:22:05.733520 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:22:05.736720 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:22:05.739737 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:22:05.752899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:22:05.757617 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:22:05.759434 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:22:05.764520 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:22:05.769526 kernel: loop0: detected capacity change from 0 to 119320
Sep 10 23:22:05.768908 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:22:05.773234 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:22:05.776862 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:22:05.786712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:22:05.796522 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:22:05.800303 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:22:05.805704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:22:05.809617 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:22:05.817539 kernel: loop1: detected capacity change from 0 to 100600
Sep 10 23:22:05.838509 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 10 23:22:05.838525 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 10 23:22:05.842434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:22:05.844524 kernel: loop2: detected capacity change from 0 to 203944
Sep 10 23:22:05.890519 kernel: loop3: detected capacity change from 0 to 119320
Sep 10 23:22:05.902540 kernel: loop4: detected capacity change from 0 to 100600
Sep 10 23:22:05.910531 kernel: loop5: detected capacity change from 0 to 203944
Sep 10 23:22:05.916045 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:22:05.916458 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 10 23:22:05.919839 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:22:05.919957 systemd[1]: Reloading...
Sep 10 23:22:05.990541 zram_generator::config[1257]: No configuration found.
Sep 10 23:22:06.061004 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:22:06.123178 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:22:06.123578 systemd[1]: Reloading finished in 203 ms.
Sep 10 23:22:06.159035 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:22:06.160457 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:22:06.170623 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:22:06.172195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:22:06.189822 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:22:06.189839 systemd[1]: Reloading...
Sep 10 23:22:06.191179 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:22:06.191214 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:22:06.191478 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:22:06.191715 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:22:06.192340 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:22:06.192566 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 10 23:22:06.192613 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Sep 10 23:22:06.197150 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:22:06.197166 systemd-tmpfiles[1285]: Skipping /boot
Sep 10 23:22:06.202938 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:22:06.202957 systemd-tmpfiles[1285]: Skipping /boot
Sep 10 23:22:06.248939 zram_generator::config[1312]: No configuration found.
Sep 10 23:22:06.379444 systemd[1]: Reloading finished in 189 ms.
Sep 10 23:22:06.399936 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:22:06.405089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:22:06.418640 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:22:06.420890 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:22:06.422879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:22:06.426692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:22:06.429052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:22:06.430845 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:22:06.436535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:06.438890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:06.440901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:06.446645 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:06.447675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:06.447821 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:06.449471 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:22:06.453089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:06.453288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:06.455833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:06.456003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:06.457809 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:22:06.469152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:06.474774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:06.476776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:06.479702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:06.479817 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:06.481833 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:22:06.488894 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:22:06.491016 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:22:06.492378 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:06.492566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:06.493924 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Sep 10 23:22:06.493950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:06.494184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:06.496161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:06.496346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:06.498419 augenrules[1383]: No rules
Sep 10 23:22:06.498541 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:22:06.500165 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:22:06.501999 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:22:06.502176 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:22:06.515592 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:22:06.516343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:06.518281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:06.521885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:22:06.524678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:06.542384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:06.544769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:06.544898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:06.545017 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:22:06.545960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:22:06.548426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:06.549272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:06.550968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:22:06.551563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:22:06.553924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:06.554073 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:06.559854 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:22:06.567551 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:06.568226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:06.578664 augenrules[1404]: /sbin/augenrules: No change
Sep 10 23:22:06.579695 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:22:06.591525 augenrules[1459]: No rules
Sep 10 23:22:06.599655 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:22:06.601059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:22:06.601115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:22:06.604861 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:22:06.606288 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:22:06.606460 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:22:06.644600 systemd-resolved[1351]: Positive Trust Anchors:
Sep 10 23:22:06.644617 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:22:06.644659 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:22:06.654395 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Sep 10 23:22:06.656814 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:22:06.657996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:22:06.659275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:22:06.662169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:22:06.683994 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:22:06.686999 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 23:22:06.688861 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:22:06.690025 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:22:06.692721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:22:06.693892 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:22:06.694899 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:22:06.694925 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:22:06.695649 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:22:06.696633 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:22:06.698825 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:22:06.700421 systemd-networkd[1464]: lo: Link UP Sep 10 23:22:06.700433 systemd-networkd[1464]: lo: Gained carrier Sep 10 23:22:06.701265 systemd-networkd[1464]: Enumeration completed Sep 10 23:22:06.701662 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:22:06.703713 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:22:06.703722 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:22:06.706540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:22:06.708534 systemd-networkd[1464]: eth0: Link UP Sep 10 23:22:06.709172 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Sep 10 23:22:06.710693 systemd-networkd[1464]: eth0: Gained carrier Sep 10 23:22:06.710715 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:22:06.712321 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:22:06.713762 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:22:06.714831 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:22:06.726540 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:22:06.727228 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Sep 10 23:22:06.727412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:22:07.227605 systemd-resolved[1351]: Clock change detected. Flushing caches. Sep 10 23:22:07.227652 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:22:07.227695 systemd-timesyncd[1465]: Initial clock synchronization to Wed 2025-09-10 23:22:07.227566 UTC. Sep 10 23:22:07.229160 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:22:07.231941 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 23:22:07.233098 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:22:07.234588 systemd[1]: Reached target network.target - Network. Sep 10 23:22:07.235371 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:22:07.236407 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:22:07.237401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 10 23:22:07.237431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:22:07.239156 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:22:07.242483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:22:07.244474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:22:07.247833 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:22:07.249744 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:22:07.250667 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:22:07.255907 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:22:07.258120 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:22:07.261024 jq[1495]: false Sep 10 23:22:07.261524 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:22:07.263476 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:22:07.268735 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:22:07.271356 extend-filesystems[1496]: Found /dev/vda6 Sep 10 23:22:07.272474 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:22:07.274770 extend-filesystems[1496]: Found /dev/vda9 Sep 10 23:22:07.275429 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:22:07.277239 extend-filesystems[1496]: Checking size of /dev/vda9 Sep 10 23:22:07.277425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 10 23:22:07.277801 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:22:07.279483 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:22:07.280998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:22:07.285292 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:22:07.286574 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:22:07.286753 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:22:07.291786 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:22:07.292994 jq[1515]: true Sep 10 23:22:07.291965 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:22:07.303741 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:22:07.309942 jq[1523]: true Sep 10 23:22:07.310431 extend-filesystems[1496]: Resized partition /dev/vda9 Sep 10 23:22:07.315119 update_engine[1513]: I20250910 23:22:07.314900 1513 main.cc:92] Flatcar Update Engine starting Sep 10 23:22:07.323309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:22:07.324362 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:22:07.324497 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:22:07.324728 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:22:07.328779 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Sep 10 23:22:07.336566 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:22:07.350362 tar[1520]: linux-arm64/helm Sep 10 23:22:07.367432 dbus-daemon[1492]: [system] SELinux support is enabled Sep 10 23:22:07.367606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:22:07.370306 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:22:07.370338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:22:07.376735 update_engine[1513]: I20250910 23:22:07.374787 1513 update_check_scheduler.cc:74] Next update check in 8m56s Sep 10 23:22:07.377003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:22:07.377030 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:22:07.380964 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:22:07.387050 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:22:07.389802 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:22:07.392342 systemd-logind[1504]: New seat seat0. Sep 10 23:22:07.393198 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:22:07.399400 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:22:07.416772 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:22:07.416772 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:22:07.416772 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 10 23:22:07.427908 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Sep 10 23:22:07.428647 bash[1558]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:22:07.418634 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:22:07.418822 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:22:07.438601 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:22:07.444842 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:22:07.449084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:22:07.452853 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 23:22:07.495218 containerd[1524]: time="2025-09-10T23:22:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:22:07.496975 containerd[1524]: time="2025-09-10T23:22:07.496825578Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 23:22:07.507626 containerd[1524]: time="2025-09-10T23:22:07.507581418Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.92µs" Sep 10 23:22:07.507626 containerd[1524]: time="2025-09-10T23:22:07.507617098Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:22:07.507719 containerd[1524]: time="2025-09-10T23:22:07.507635978Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:22:07.507930 containerd[1524]: time="2025-09-10T23:22:07.507791738Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 
23:22:07.507930 containerd[1524]: time="2025-09-10T23:22:07.507812778Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:22:07.507930 containerd[1524]: time="2025-09-10T23:22:07.507837178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:22:07.507930 containerd[1524]: time="2025-09-10T23:22:07.507883698Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:22:07.507930 containerd[1524]: time="2025-09-10T23:22:07.507895178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508252 containerd[1524]: time="2025-09-10T23:22:07.508114178Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508252 containerd[1524]: time="2025-09-10T23:22:07.508134298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508252 containerd[1524]: time="2025-09-10T23:22:07.508152698Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508252 containerd[1524]: time="2025-09-10T23:22:07.508161618Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508252 containerd[1524]: time="2025-09-10T23:22:07.508233018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508614 containerd[1524]: time="2025-09-10T23:22:07.508447018Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508614 containerd[1524]: time="2025-09-10T23:22:07.508483618Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:22:07.508614 containerd[1524]: time="2025-09-10T23:22:07.508495458Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:22:07.508614 containerd[1524]: time="2025-09-10T23:22:07.508530938Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:22:07.508833 containerd[1524]: time="2025-09-10T23:22:07.508748418Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:22:07.508833 containerd[1524]: time="2025-09-10T23:22:07.508820298Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:22:07.512080 containerd[1524]: time="2025-09-10T23:22:07.511977058Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:22:07.512080 containerd[1524]: time="2025-09-10T23:22:07.512045698Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:22:07.512080 containerd[1524]: time="2025-09-10T23:22:07.512061778Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:22:07.512080 containerd[1524]: time="2025-09-10T23:22:07.512081418Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512124378Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 
containerd[1524]: time="2025-09-10T23:22:07.512138738Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512150258Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512166018Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512180818Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512190738Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:22:07.512204 containerd[1524]: time="2025-09-10T23:22:07.512199658Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:22:07.512330 containerd[1524]: time="2025-09-10T23:22:07.512211498Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512344498Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512373458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512388738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512409258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: 
time="2025-09-10T23:22:07.512420378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512430898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512441818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512451058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512461898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512471578Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:22:07.512596 containerd[1524]: time="2025-09-10T23:22:07.512482098Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:22:07.512790 containerd[1524]: time="2025-09-10T23:22:07.512658778Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:22:07.512790 containerd[1524]: time="2025-09-10T23:22:07.512681338Z" level=info msg="Start snapshots syncer" Sep 10 23:22:07.512790 containerd[1524]: time="2025-09-10T23:22:07.512716378Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:22:07.513087 containerd[1524]: time="2025-09-10T23:22:07.513045178Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:22:07.513176 containerd[1524]: time="2025-09-10T23:22:07.513098018Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:22:07.513176 containerd[1524]: time="2025-09-10T23:22:07.513172058Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:22:07.513442 containerd[1524]: time="2025-09-10T23:22:07.513417138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:22:07.513480 containerd[1524]: time="2025-09-10T23:22:07.513449578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:22:07.513480 containerd[1524]: time="2025-09-10T23:22:07.513460578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:22:07.513525 containerd[1524]: time="2025-09-10T23:22:07.513482538Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:22:07.513525 containerd[1524]: time="2025-09-10T23:22:07.513495618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:22:07.513525 containerd[1524]: time="2025-09-10T23:22:07.513505978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:22:07.513525 containerd[1524]: time="2025-09-10T23:22:07.513516178Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:22:07.513590 containerd[1524]: time="2025-09-10T23:22:07.513541938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:22:07.513590 containerd[1524]: time="2025-09-10T23:22:07.513553978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:22:07.513590 containerd[1524]: time="2025-09-10T23:22:07.513564378Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:22:07.513640 containerd[1524]: time="2025-09-10T23:22:07.513604258Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:22:07.513640 containerd[1524]: time="2025-09-10T23:22:07.513619138Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:22:07.513640 containerd[1524]: time="2025-09-10T23:22:07.513627538Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:22:07.513640 containerd[1524]: time="2025-09-10T23:22:07.513636458Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:22:07.513706 containerd[1524]: time="2025-09-10T23:22:07.513645218Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:22:07.513706 containerd[1524]: time="2025-09-10T23:22:07.513657778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:22:07.513706 containerd[1524]: time="2025-09-10T23:22:07.513688578Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:22:07.514038 containerd[1524]: time="2025-09-10T23:22:07.513765978Z" level=info msg="runtime interface created" Sep 10 23:22:07.514038 containerd[1524]: time="2025-09-10T23:22:07.513775498Z" level=info msg="created NRI interface" Sep 10 23:22:07.514038 containerd[1524]: time="2025-09-10T23:22:07.513783938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:22:07.514038 containerd[1524]: time="2025-09-10T23:22:07.513794778Z" level=info msg="Connect containerd service" Sep 10 23:22:07.514038 containerd[1524]: time="2025-09-10T23:22:07.513827138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:22:07.514637 
containerd[1524]: time="2025-09-10T23:22:07.514586138Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582574818Z" level=info msg="Start subscribing containerd event" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582680818Z" level=info msg="Start recovering state" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582784018Z" level=info msg="Start event monitor" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582796538Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582811938Z" level=info msg="Start streaming server" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582836978Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582843938Z" level=info msg="runtime interface starting up..." Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582852938Z" level=info msg="starting plugins..." Sep 10 23:22:07.582944 containerd[1524]: time="2025-09-10T23:22:07.582867538Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:22:07.583622 containerd[1524]: time="2025-09-10T23:22:07.583595618Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:22:07.583679 containerd[1524]: time="2025-09-10T23:22:07.583662578Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:22:07.583745 containerd[1524]: time="2025-09-10T23:22:07.583731498Z" level=info msg="containerd successfully booted in 0.088855s" Sep 10 23:22:07.583926 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 23:22:07.676455 tar[1520]: linux-arm64/LICENSE Sep 10 23:22:07.676541 tar[1520]: linux-arm64/README.md Sep 10 23:22:07.699445 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:22:08.598149 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:22:08.618310 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:22:08.621507 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:22:08.643720 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:22:08.643941 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:22:08.646419 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:22:08.662700 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:22:08.665247 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:22:08.667184 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:22:08.668511 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:22:09.060427 systemd-networkd[1464]: eth0: Gained IPv6LL Sep 10 23:22:09.063357 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:22:09.064876 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:22:09.067119 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:22:09.069347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:09.071282 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:22:09.094469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:22:09.095988 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:22:09.096203 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 10 23:22:09.098475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:22:09.684799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:09.686285 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:22:09.688288 systemd[1]: Startup finished in 1.992s (kernel) + 6.498s (initrd) + 4.137s (userspace) = 12.628s. Sep 10 23:22:09.688844 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:22:10.088642 kubelet[1634]: E0910 23:22:10.088584 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:22:10.090783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:22:10.090912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:22:10.091206 systemd[1]: kubelet.service: Consumed 776ms CPU time, 257.3M memory peak. Sep 10 23:22:12.341697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:22:12.342703 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294). Sep 10 23:22:12.423042 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:12.424984 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:12.431343 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:22:12.432240 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 10 23:22:12.437470 systemd-logind[1504]: New session 1 of user core. Sep 10 23:22:12.453607 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:22:12.456023 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:22:12.473104 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:22:12.475034 systemd-logind[1504]: New session c1 of user core. Sep 10 23:22:12.594825 systemd[1652]: Queued start job for default target default.target. Sep 10 23:22:12.604285 systemd[1652]: Created slice app.slice - User Application Slice. Sep 10 23:22:12.604312 systemd[1652]: Reached target paths.target - Paths. Sep 10 23:22:12.604348 systemd[1652]: Reached target timers.target - Timers. Sep 10 23:22:12.605591 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:22:12.615454 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:22:12.615522 systemd[1652]: Reached target sockets.target - Sockets. Sep 10 23:22:12.615565 systemd[1652]: Reached target basic.target - Basic System. Sep 10 23:22:12.615592 systemd[1652]: Reached target default.target - Main User Target. Sep 10 23:22:12.615618 systemd[1652]: Startup finished in 135ms. Sep 10 23:22:12.615745 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:22:12.617050 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:22:12.685473 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:45302.service - OpenSSH per-connection server daemon (10.0.0.1:45302). Sep 10 23:22:12.746680 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 45302 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:12.747979 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:12.752339 systemd-logind[1504]: New session 2 of user core. 
Sep 10 23:22:12.762449 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 10 23:22:12.814569 sshd[1666]: Connection closed by 10.0.0.1 port 45302
Sep 10 23:22:12.815064 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Sep 10 23:22:12.824484 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:45302.service: Deactivated successfully.
Sep 10 23:22:12.826748 systemd[1]: session-2.scope: Deactivated successfully.
Sep 10 23:22:12.827460 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit.
Sep 10 23:22:12.829804 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:45304.service - OpenSSH per-connection server daemon (10.0.0.1:45304).
Sep 10 23:22:12.830273 systemd-logind[1504]: Removed session 2.
Sep 10 23:22:12.887757 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 45304 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:22:12.889230 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:22:12.894098 systemd-logind[1504]: New session 3 of user core.
Sep 10 23:22:12.909456 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:22:12.958966 sshd[1675]: Connection closed by 10.0.0.1 port 45304
Sep 10 23:22:12.959454 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 10 23:22:12.973778 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:45304.service: Deactivated successfully.
Sep 10 23:22:12.975341 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:22:12.976150 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:22:12.978863 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:45318.service - OpenSSH per-connection server daemon (10.0.0.1:45318).
Sep 10 23:22:12.979317 systemd-logind[1504]: Removed session 3.
Sep 10 23:22:13.038717 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 45318 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:22:13.040024 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:22:13.043867 systemd-logind[1504]: New session 4 of user core.
Sep 10 23:22:13.058479 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:22:13.110570 sshd[1684]: Connection closed by 10.0.0.1 port 45318
Sep 10 23:22:13.110982 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 10 23:22:13.122461 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:45318.service: Deactivated successfully.
Sep 10 23:22:13.124715 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:22:13.126435 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:22:13.128659 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:45320.service - OpenSSH per-connection server daemon (10.0.0.1:45320).
Sep 10 23:22:13.129533 systemd-logind[1504]: Removed session 4.
Sep 10 23:22:13.186373 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 45320 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:22:13.187756 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:22:13.192560 systemd-logind[1504]: New session 5 of user core.
Sep 10 23:22:13.200817 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:22:13.258076 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:22:13.258390 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:22:13.272154 sudo[1694]: pam_unix(sudo:session): session closed for user root
Sep 10 23:22:13.273699 sshd[1693]: Connection closed by 10.0.0.1 port 45320
Sep 10 23:22:13.274223 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 10 23:22:13.283347 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:45320.service: Deactivated successfully.
Sep 10 23:22:13.285730 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:22:13.286610 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:22:13.288979 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:45324.service - OpenSSH per-connection server daemon (10.0.0.1:45324).
Sep 10 23:22:13.289988 systemd-logind[1504]: Removed session 5.
Sep 10 23:22:13.341874 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 45324 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:22:13.343250 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:22:13.347805 systemd-logind[1504]: New session 6 of user core.
Sep 10 23:22:13.359445 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:22:13.412001 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:22:13.412277 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:22:13.417063 sudo[1705]: pam_unix(sudo:session): session closed for user root
Sep 10 23:22:13.422044 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:22:13.422658 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:22:13.431489 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:22:13.473821 augenrules[1727]: No rules
Sep 10 23:22:13.475029 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:22:13.475294 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:22:13.476710 sudo[1704]: pam_unix(sudo:session): session closed for user root
Sep 10 23:22:13.478306 sshd[1703]: Connection closed by 10.0.0.1 port 45324
Sep 10 23:22:13.478645 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 10 23:22:13.490398 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:45324.service: Deactivated successfully.
Sep 10 23:22:13.492815 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:22:13.493709 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:22:13.496772 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:45338.service - OpenSSH per-connection server daemon (10.0.0.1:45338).
Sep 10 23:22:13.497459 systemd-logind[1504]: Removed session 6.
Sep 10 23:22:13.561226 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 45338 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:22:13.562622 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:22:13.567246 systemd-logind[1504]: New session 7 of user core.
Sep 10 23:22:13.581506 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:22:13.632923 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:22:13.633192 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:22:13.915535 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:22:13.934647 (dockerd)[1760]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:22:14.135535 dockerd[1760]: time="2025-09-10T23:22:14.135474778Z" level=info msg="Starting up"
Sep 10 23:22:14.136483 dockerd[1760]: time="2025-09-10T23:22:14.136407098Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:22:14.146394 dockerd[1760]: time="2025-09-10T23:22:14.146348018Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 10 23:22:14.161475 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2691228674-merged.mount: Deactivated successfully.
Sep 10 23:22:14.287591 dockerd[1760]: time="2025-09-10T23:22:14.287484818Z" level=info msg="Loading containers: start."
Sep 10 23:22:14.299299 kernel: Initializing XFRM netlink socket
Sep 10 23:22:14.489059 systemd-networkd[1464]: docker0: Link UP
Sep 10 23:22:14.493637 dockerd[1760]: time="2025-09-10T23:22:14.493589938Z" level=info msg="Loading containers: done."
Sep 10 23:22:14.505801 dockerd[1760]: time="2025-09-10T23:22:14.505743818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:22:14.505954 dockerd[1760]: time="2025-09-10T23:22:14.505839258Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 10 23:22:14.505954 dockerd[1760]: time="2025-09-10T23:22:14.505924298Z" level=info msg="Initializing buildkit"
Sep 10 23:22:14.527980 dockerd[1760]: time="2025-09-10T23:22:14.527933898Z" level=info msg="Completed buildkit initialization"
Sep 10 23:22:14.534136 dockerd[1760]: time="2025-09-10T23:22:14.534078138Z" level=info msg="Daemon has completed initialization"
Sep 10 23:22:14.534298 dockerd[1760]: time="2025-09-10T23:22:14.534147698Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:22:14.534506 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:22:15.109560 containerd[1524]: time="2025-09-10T23:22:15.109518498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 10 23:22:16.036521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868122328.mount: Deactivated successfully.
Sep 10 23:22:17.108586 containerd[1524]: time="2025-09-10T23:22:17.108508458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:17.109685 containerd[1524]: time="2025-09-10T23:22:17.109440338Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 10 23:22:17.110724 containerd[1524]: time="2025-09-10T23:22:17.110689298Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:17.113994 containerd[1524]: time="2025-09-10T23:22:17.113964618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:17.114807 containerd[1524]: time="2025-09-10T23:22:17.114774378Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.00520928s"
Sep 10 23:22:17.115016 containerd[1524]: time="2025-09-10T23:22:17.114884498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 10 23:22:17.116139 containerd[1524]: time="2025-09-10T23:22:17.116115058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 10 23:22:18.633320 containerd[1524]: time="2025-09-10T23:22:18.632907698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:18.633734 containerd[1524]: time="2025-09-10T23:22:18.633699578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 10 23:22:18.634477 containerd[1524]: time="2025-09-10T23:22:18.634446698Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:18.638057 containerd[1524]: time="2025-09-10T23:22:18.638005418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:18.639118 containerd[1524]: time="2025-09-10T23:22:18.638689578Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.522541s"
Sep 10 23:22:18.639118 containerd[1524]: time="2025-09-10T23:22:18.638732978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 10 23:22:18.639542 containerd[1524]: time="2025-09-10T23:22:18.639468578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 10 23:22:19.858097 containerd[1524]: time="2025-09-10T23:22:19.858030898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:19.858647 containerd[1524]: time="2025-09-10T23:22:19.858608938Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 10 23:22:19.860316 containerd[1524]: time="2025-09-10T23:22:19.860290458Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:19.863397 containerd[1524]: time="2025-09-10T23:22:19.863356898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:19.864050 containerd[1524]: time="2025-09-10T23:22:19.864015298Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.2243852s"
Sep 10 23:22:19.864087 containerd[1524]: time="2025-09-10T23:22:19.864050298Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 10 23:22:19.864478 containerd[1524]: time="2025-09-10T23:22:19.864453778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 10 23:22:20.309684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:22:20.311616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:22:20.446680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:22:20.450141 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:22:20.497595 kubelet[2054]: E0910 23:22:20.497529 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:22:20.500364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:22:20.500489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:22:20.501411 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.6M memory peak.
Sep 10 23:22:20.875379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735405508.mount: Deactivated successfully.
Sep 10 23:22:21.341788 containerd[1524]: time="2025-09-10T23:22:21.341404578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:21.342100 containerd[1524]: time="2025-09-10T23:22:21.342004218Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 10 23:22:21.344848 containerd[1524]: time="2025-09-10T23:22:21.344804938Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:21.347389 containerd[1524]: time="2025-09-10T23:22:21.347157218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:21.347752 containerd[1524]: time="2025-09-10T23:22:21.347725658Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.48324088s"
Sep 10 23:22:21.347781 containerd[1524]: time="2025-09-10T23:22:21.347759418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 10 23:22:21.348541 containerd[1524]: time="2025-09-10T23:22:21.348331418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 23:22:21.838126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305665.mount: Deactivated successfully.
Sep 10 23:22:22.540664 containerd[1524]: time="2025-09-10T23:22:22.540610658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:22.542868 containerd[1524]: time="2025-09-10T23:22:22.542832578Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 23:22:22.543786 containerd[1524]: time="2025-09-10T23:22:22.543734258Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:22.547324 containerd[1524]: time="2025-09-10T23:22:22.547233658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:22.548184 containerd[1524]: time="2025-09-10T23:22:22.548156698Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.199781s"
Sep 10 23:22:22.548237 containerd[1524]: time="2025-09-10T23:22:22.548190778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 23:22:22.548699 containerd[1524]: time="2025-09-10T23:22:22.548650138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:22:23.279639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340568371.mount: Deactivated successfully.
Sep 10 23:22:23.288817 containerd[1524]: time="2025-09-10T23:22:23.288508658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:22:23.289550 containerd[1524]: time="2025-09-10T23:22:23.289356658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:22:23.290477 containerd[1524]: time="2025-09-10T23:22:23.290447338Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:22:23.293718 containerd[1524]: time="2025-09-10T23:22:23.293690538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:22:23.295124 containerd[1524]: time="2025-09-10T23:22:23.294781698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 746.09712ms"
Sep 10 23:22:23.295124 containerd[1524]: time="2025-09-10T23:22:23.294814738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:22:23.296505 containerd[1524]: time="2025-09-10T23:22:23.295640778Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 23:22:23.827149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889117076.mount: Deactivated successfully.
Sep 10 23:22:25.801955 containerd[1524]: time="2025-09-10T23:22:25.801882658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:25.818396 containerd[1524]: time="2025-09-10T23:22:25.818328378Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 10 23:22:25.819521 containerd[1524]: time="2025-09-10T23:22:25.819474138Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:25.822886 containerd[1524]: time="2025-09-10T23:22:25.822423058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:22:25.823492 containerd[1524]: time="2025-09-10T23:22:25.823466538Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.52715428s"
Sep 10 23:22:25.823546 containerd[1524]: time="2025-09-10T23:22:25.823494418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 10 23:22:30.559885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 23:22:30.561764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:22:30.772387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:22:30.775572 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:22:30.810676 kubelet[2212]: E0910 23:22:30.810495 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:22:30.814751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:22:30.814984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:22:30.815535 systemd[1]: kubelet.service: Consumed 131ms CPU time, 106.9M memory peak.
Sep 10 23:22:30.857393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:22:30.857526 systemd[1]: kubelet.service: Consumed 131ms CPU time, 106.9M memory peak.
Sep 10 23:22:30.859491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:22:30.880733 systemd[1]: Reload requested from client PID 2227 ('systemctl') (unit session-7.scope)...
Sep 10 23:22:30.880747 systemd[1]: Reloading...
Sep 10 23:22:30.956294 zram_generator::config[2272]: No configuration found.
Sep 10 23:22:31.198119 systemd[1]: Reloading finished in 317 ms.
Sep 10 23:22:31.249788 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 10 23:22:31.249859 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 10 23:22:31.250118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:22:31.250159 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak.
Sep 10 23:22:31.251495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:22:31.359839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:22:31.363607 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:22:31.395072 kubelet[2314]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:22:31.395072 kubelet[2314]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:22:31.395072 kubelet[2314]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:22:31.395446 kubelet[2314]: I0910 23:22:31.395132 2314 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:22:33.046027 kubelet[2314]: I0910 23:22:33.044378 2314 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 23:22:33.046027 kubelet[2314]: I0910 23:22:33.044413 2314 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:22:33.046027 kubelet[2314]: I0910 23:22:33.044875 2314 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 23:22:33.064299 kubelet[2314]: E0910 23:22:33.064241 2314 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:22:33.065287 kubelet[2314]: I0910 23:22:33.065272 2314 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:22:33.073601 kubelet[2314]: I0910 23:22:33.073556 2314 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:22:33.077323 kubelet[2314]: I0910 23:22:33.077222 2314 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:22:33.078821 kubelet[2314]: I0910 23:22:33.078058 2314 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 23:22:33.078821 kubelet[2314]: I0910 23:22:33.078203 2314 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:22:33.078821 kubelet[2314]: I0910 23:22:33.078230 2314 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:22:33.078821 kubelet[2314]: I0910 23:22:33.078423 2314 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:22:33.079055 kubelet[2314]: I0910 23:22:33.078432 2314 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 23:22:33.079055 kubelet[2314]: I0910 23:22:33.078652 2314 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:22:33.081650 kubelet[2314]: I0910 23:22:33.081626 2314 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 23:22:33.082043 kubelet[2314]: I0910 23:22:33.082027 2314 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:22:33.082139 kubelet[2314]: I0910 23:22:33.082129 2314 kubelet.go:314] "Adding apiserver pod source"
Sep 10 23:22:33.082239 kubelet[2314]: I0910 23:22:33.082230 2314 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:22:33.084228 kubelet[2314]: W0910 23:22:33.084158 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Sep 10 23:22:33.084395 kubelet[2314]: E0910 23:22:33.084375 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:22:33.085367 kubelet[2314]: W0910 23:22:33.084684 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Sep 10 23:22:33.085491 kubelet[2314]: E0910 23:22:33.085472 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:22:33.087740 kubelet[2314]: I0910 23:22:33.087717 2314 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 10 23:22:33.089086 kubelet[2314]: I0910 23:22:33.089066 2314 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:22:33.090007 kubelet[2314]: W0910 23:22:33.089972 2314 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:22:33.090955 kubelet[2314]: I0910 23:22:33.090939 2314 server.go:1274] "Started kubelet"
Sep 10 23:22:33.092072 kubelet[2314]: I0910 23:22:33.092040 2314 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:22:33.092280 kubelet[2314]: I0910 23:22:33.092233 2314 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:22:33.092933 kubelet[2314]: I0910 23:22:33.092902 2314 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:22:33.094271 kubelet[2314]: I0910 23:22:33.093703 2314 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 23:22:33.094271 kubelet[2314]: I0910 23:22:33.094087 2314 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:22:33.094848 kubelet[2314]: I0910 23:22:33.094826 2314 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:22:33.096100 kubelet[2314]: I0910 23:22:33.096077 2314 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 23:22:33.096250 kubelet[2314]: I0910 23:22:33.096186 2314 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 23:22:33.096314 kubelet[2314]: I0910 23:22:33.096253 2314 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:22:33.096611 kubelet[2314]: W0910 23:22:33.096569 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused
Sep 10 23:22:33.096660 kubelet[2314]: E0910 23:22:33.096620 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:22:33.097104 kubelet[2314]: E0910 23:22:33.097012 2314 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:22:33.097157 kubelet[2314]: E0910 23:22:33.097101 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms"
Sep 10 23:22:33.098272 kubelet[2314]: I0910 23:22:33.098109 2314 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:22:33.098272 kubelet[2314]: I0910 23:22:33.098127 2314 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:22:33.098272 kubelet[2314]: E0910 23:22:33.098112 2314 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:22:33.098272 kubelet[2314]: I0910 23:22:33.098196 2314 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:22:33.099146 kubelet[2314]: E0910 23:22:33.097640 2314 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18640f473532429a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:22:33.090917018 +0000 UTC m=+1.724590481,LastTimestamp:2025-09-10 23:22:33.090917018 +0000 UTC m=+1.724590481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:22:33.108450 kubelet[2314]: I0910 23:22:33.108428 2314 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 23:22:33.108450 kubelet[2314]: I0910 23:22:33.108442 2314 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 23:22:33.108528 kubelet[2314]: I0910 23:22:33.108460 2314 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:33.109233 kubelet[2314]: I0910 23:22:33.109093 2314 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:22:33.110188 kubelet[2314]: I0910 23:22:33.110170 2314 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:22:33.110296 kubelet[2314]: I0910 23:22:33.110284 2314 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 23:22:33.110367 kubelet[2314]: I0910 23:22:33.110357 2314 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 23:22:33.110466 kubelet[2314]: E0910 23:22:33.110444 2314 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:22:33.189350 kubelet[2314]: W0910 23:22:33.189280 2314 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 10 23:22:33.189462 kubelet[2314]: E0910 23:22:33.189359 2314 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:22:33.193312 kubelet[2314]: I0910 23:22:33.193283 2314 policy_none.go:49] "None policy: Start" Sep 10 23:22:33.194057 kubelet[2314]: I0910 23:22:33.194027 2314 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 23:22:33.194057 kubelet[2314]: I0910 23:22:33.194052 2314 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:22:33.197150 kubelet[2314]: E0910 23:22:33.197125 2314 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:22:33.200517 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 10 23:22:33.211410 kubelet[2314]: E0910 23:22:33.211368 2314 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:22:33.223123 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:22:33.226145 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:22:33.246069 kubelet[2314]: I0910 23:22:33.246042 2314 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:22:33.246287 kubelet[2314]: I0910 23:22:33.246249 2314 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:22:33.246390 kubelet[2314]: I0910 23:22:33.246354 2314 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:22:33.246617 kubelet[2314]: I0910 23:22:33.246596 2314 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:22:33.248627 kubelet[2314]: E0910 23:22:33.248607 2314 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:22:33.298647 kubelet[2314]: E0910 23:22:33.298312 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Sep 10 23:22:33.347597 kubelet[2314]: I0910 23:22:33.347560 2314 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:22:33.348051 kubelet[2314]: E0910 23:22:33.348009 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 23:22:33.420462 systemd[1]: Created 
slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 10 23:22:33.441800 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 10 23:22:33.468803 systemd[1]: Created slice kubepods-burstable-pod4917ad058d72d72420abdfc684a37682.slice - libcontainer container kubepods-burstable-pod4917ad058d72d72420abdfc684a37682.slice. Sep 10 23:22:33.497766 kubelet[2314]: I0910 23:22:33.497730 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:33.497887 kubelet[2314]: I0910 23:22:33.497779 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:33.497887 kubelet[2314]: I0910 23:22:33.497804 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:33.497887 kubelet[2314]: I0910 23:22:33.497822 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:33.497887 kubelet[2314]: I0910 23:22:33.497836 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:33.497887 kubelet[2314]: I0910 23:22:33.497860 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:33.497995 kubelet[2314]: I0910 23:22:33.497876 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:33.497995 kubelet[2314]: I0910 23:22:33.497892 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:33.497995 kubelet[2314]: I0910 23:22:33.497908 2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:33.549995 kubelet[2314]: I0910 23:22:33.549912 2314 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:22:33.550318 kubelet[2314]: E0910 23:22:33.550271 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 23:22:33.699747 kubelet[2314]: E0910 23:22:33.699706 2314 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Sep 10 23:22:33.741304 kubelet[2314]: E0910 23:22:33.740920 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:33.741559 containerd[1524]: time="2025-09-10T23:22:33.741490378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:33.758376 containerd[1524]: time="2025-09-10T23:22:33.758336018Z" level=info msg="connecting to shim 9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2" address="unix:///run/containerd/s/a158f9411e08e27d93332c2f1742a023db0f5f8068ee2a973c7764ae9b54d060" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:33.768045 kubelet[2314]: E0910 23:22:33.767784 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 
23:22:33.768219 containerd[1524]: time="2025-09-10T23:22:33.768187978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:33.771541 kubelet[2314]: E0910 23:22:33.771519 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:33.772373 containerd[1524]: time="2025-09-10T23:22:33.772339578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4917ad058d72d72420abdfc684a37682,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:33.780457 systemd[1]: Started cri-containerd-9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2.scope - libcontainer container 9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2. Sep 10 23:22:33.801727 containerd[1524]: time="2025-09-10T23:22:33.801617578Z" level=info msg="connecting to shim 9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8" address="unix:///run/containerd/s/a11d8807101da8dca8c60546ba9ca6fcd30714601613c1eec21f03c56004e088" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:33.803652 containerd[1524]: time="2025-09-10T23:22:33.803601858Z" level=info msg="connecting to shim 5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0" address="unix:///run/containerd/s/8e7552e8cf07bb054d6372a76d21079a5f6472e3c3be35a1ec3fc699e73db0c8" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:33.830528 systemd[1]: Started cri-containerd-5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0.scope - libcontainer container 5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0. 
Sep 10 23:22:33.831862 systemd[1]: Started cri-containerd-9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8.scope - libcontainer container 9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8. Sep 10 23:22:33.833749 containerd[1524]: time="2025-09-10T23:22:33.833713418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2\"" Sep 10 23:22:33.834819 kubelet[2314]: E0910 23:22:33.834786 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:33.836721 containerd[1524]: time="2025-09-10T23:22:33.836687378Z" level=info msg="CreateContainer within sandbox \"9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:22:33.843415 containerd[1524]: time="2025-09-10T23:22:33.843386458Z" level=info msg="Container 1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:33.849625 containerd[1524]: time="2025-09-10T23:22:33.849589698Z" level=info msg="CreateContainer within sandbox \"9521229234ea20b960b8b2b350c38852dac7eec85b6608b8a49f29dee50d9fa2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d\"" Sep 10 23:22:33.850096 containerd[1524]: time="2025-09-10T23:22:33.850070778Z" level=info msg="StartContainer for \"1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d\"" Sep 10 23:22:33.851097 containerd[1524]: time="2025-09-10T23:22:33.851061138Z" level=info msg="connecting to shim 1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d" 
address="unix:///run/containerd/s/a158f9411e08e27d93332c2f1742a023db0f5f8068ee2a973c7764ae9b54d060" protocol=ttrpc version=3 Sep 10 23:22:33.868457 containerd[1524]: time="2025-09-10T23:22:33.868409098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0\"" Sep 10 23:22:33.869734 kubelet[2314]: E0910 23:22:33.869610 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:33.871465 containerd[1524]: time="2025-09-10T23:22:33.871430298Z" level=info msg="CreateContainer within sandbox \"5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:22:33.873517 containerd[1524]: time="2025-09-10T23:22:33.873490218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4917ad058d72d72420abdfc684a37682,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8\"" Sep 10 23:22:33.874102 kubelet[2314]: E0910 23:22:33.874081 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:33.874431 systemd[1]: Started cri-containerd-1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d.scope - libcontainer container 1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d. 
Sep 10 23:22:33.876142 containerd[1524]: time="2025-09-10T23:22:33.876116218Z" level=info msg="CreateContainer within sandbox \"9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:22:33.880437 containerd[1524]: time="2025-09-10T23:22:33.880410178Z" level=info msg="Container 9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:33.888578 containerd[1524]: time="2025-09-10T23:22:33.888539178Z" level=info msg="Container 1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:33.890659 containerd[1524]: time="2025-09-10T23:22:33.890604858Z" level=info msg="CreateContainer within sandbox \"5793eb39b3c2be9d2c888fc146062995e35ac3cd4da3c5512fc84068613aa5b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e\"" Sep 10 23:22:33.891071 containerd[1524]: time="2025-09-10T23:22:33.891031578Z" level=info msg="StartContainer for \"9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e\"" Sep 10 23:22:33.892225 containerd[1524]: time="2025-09-10T23:22:33.892190538Z" level=info msg="connecting to shim 9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e" address="unix:///run/containerd/s/8e7552e8cf07bb054d6372a76d21079a5f6472e3c3be35a1ec3fc699e73db0c8" protocol=ttrpc version=3 Sep 10 23:22:33.895141 containerd[1524]: time="2025-09-10T23:22:33.895085498Z" level=info msg="CreateContainer within sandbox \"9f2493c3adf4aad1de98a8da3a6d371708a36516f2d101a2910f768428e032a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac\"" Sep 10 23:22:33.896504 containerd[1524]: time="2025-09-10T23:22:33.896431818Z" level=info msg="StartContainer for 
\"1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac\"" Sep 10 23:22:33.898612 containerd[1524]: time="2025-09-10T23:22:33.898581698Z" level=info msg="connecting to shim 1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac" address="unix:///run/containerd/s/a11d8807101da8dca8c60546ba9ca6fcd30714601613c1eec21f03c56004e088" protocol=ttrpc version=3 Sep 10 23:22:33.909438 systemd[1]: Started cri-containerd-9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e.scope - libcontainer container 9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e. Sep 10 23:22:33.923466 systemd[1]: Started cri-containerd-1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac.scope - libcontainer container 1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac. Sep 10 23:22:33.932414 containerd[1524]: time="2025-09-10T23:22:33.932377178Z" level=info msg="StartContainer for \"1e58cc86601fca429d89a783cce48a368229eb8fd64099fce9a72bfe8dd9714d\" returns successfully" Sep 10 23:22:33.952857 kubelet[2314]: I0910 23:22:33.952823 2314 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:22:33.953288 kubelet[2314]: E0910 23:22:33.953188 2314 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 10 23:22:33.960945 containerd[1524]: time="2025-09-10T23:22:33.960862938Z" level=info msg="StartContainer for \"9b7043888656460897907ef4823c5ba0a3b017e95d69bf2c0ea870767a00386e\" returns successfully" Sep 10 23:22:33.975966 containerd[1524]: time="2025-09-10T23:22:33.975669298Z" level=info msg="StartContainer for \"1127b58b99eea3e3dff5181ca813d767e118f6685cd5bee77a0dc6a9c1e892ac\" returns successfully" Sep 10 23:22:34.124987 kubelet[2314]: E0910 23:22:34.124708 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:34.130017 kubelet[2314]: E0910 23:22:34.129992 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:34.130744 kubelet[2314]: E0910 23:22:34.130725 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:34.755302 kubelet[2314]: I0910 23:22:34.755254 2314 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:22:35.132809 kubelet[2314]: E0910 23:22:35.132781 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:35.133630 kubelet[2314]: E0910 23:22:35.133608 2314 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:35.527315 kubelet[2314]: E0910 23:22:35.525066 2314 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:22:35.582787 kubelet[2314]: I0910 23:22:35.582735 2314 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 23:22:36.086188 kubelet[2314]: I0910 23:22:36.086138 2314 apiserver.go:52] "Watching apiserver" Sep 10 23:22:36.096828 kubelet[2314]: I0910 23:22:36.096790 2314 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 23:22:37.297825 systemd[1]: Reload requested from client PID 2588 ('systemctl') (unit session-7.scope)... Sep 10 23:22:37.297838 systemd[1]: Reloading... Sep 10 23:22:37.362304 zram_generator::config[2630]: No configuration found. 
Sep 10 23:22:37.551394 systemd[1]: Reloading finished in 253 ms. Sep 10 23:22:37.576413 kubelet[2314]: I0910 23:22:37.576384 2314 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:22:37.576660 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:37.591152 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:22:37.592371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:37.592434 systemd[1]: kubelet.service: Consumed 2.067s CPU time, 127.5M memory peak. Sep 10 23:22:37.594409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:37.771460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:37.775206 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:22:37.818727 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:22:37.818727 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 23:22:37.818727 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:22:37.819057 kubelet[2672]: I0910 23:22:37.818814 2672 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:22:37.827167 kubelet[2672]: I0910 23:22:37.827116 2672 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 23:22:37.827167 kubelet[2672]: I0910 23:22:37.827149 2672 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:22:37.827417 kubelet[2672]: I0910 23:22:37.827408 2672 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 23:22:37.828805 kubelet[2672]: I0910 23:22:37.828787 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 23:22:37.831558 kubelet[2672]: I0910 23:22:37.831380 2672 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:22:37.836414 kubelet[2672]: I0910 23:22:37.836333 2672 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:22:37.839573 kubelet[2672]: I0910 23:22:37.839547 2672 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:22:37.839697 kubelet[2672]: I0910 23:22:37.839682 2672 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 23:22:37.839860 kubelet[2672]: I0910 23:22:37.839800 2672 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:22:37.840105 kubelet[2672]: I0910 23:22:37.839839 2672 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 10 23:22:37.840190 kubelet[2672]: I0910 23:22:37.840107 2672 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:22:37.840190 kubelet[2672]: I0910 23:22:37.840118 2672 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 23:22:37.840190 kubelet[2672]: I0910 23:22:37.840166 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:37.840309 kubelet[2672]: I0910 23:22:37.840296 2672 kubelet.go:408] "Attempting to sync node with API server" Sep 10 23:22:37.840353 kubelet[2672]: I0910 23:22:37.840320 2672 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:22:37.840353 kubelet[2672]: I0910 23:22:37.840340 2672 kubelet.go:314] "Adding apiserver pod source" Sep 10 23:22:37.840406 kubelet[2672]: I0910 23:22:37.840370 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:22:37.842334 kubelet[2672]: I0910 23:22:37.842304 2672 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 23:22:37.843089 kubelet[2672]: I0910 23:22:37.843069 2672 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:22:37.843808 kubelet[2672]: I0910 23:22:37.843784 2672 server.go:1274] "Started kubelet" Sep 10 23:22:37.846206 kubelet[2672]: I0910 23:22:37.846184 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:22:37.846571 kubelet[2672]: I0910 23:22:37.846546 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:22:37.846776 kubelet[2672]: I0910 23:22:37.846737 2672 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:22:37.850408 kubelet[2672]: I0910 23:22:37.847975 2672 server.go:449] "Adding debug handlers to kubelet server" Sep 10 23:22:37.857130 
kubelet[2672]: I0910 23:22:37.856803 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:22:37.857130 kubelet[2672]: I0910 23:22:37.857043 2672 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:22:37.857245 kubelet[2672]: I0910 23:22:37.857200 2672 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 23:22:37.858014 kubelet[2672]: I0910 23:22:37.857968 2672 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 23:22:37.859449 kubelet[2672]: I0910 23:22:37.858185 2672 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:22:37.860399 kubelet[2672]: E0910 23:22:37.860272 2672 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:22:37.861082 kubelet[2672]: I0910 23:22:37.861059 2672 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:22:37.863391 kubelet[2672]: I0910 23:22:37.863365 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:22:37.865072 kubelet[2672]: I0910 23:22:37.865048 2672 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:22:37.871961 kubelet[2672]: E0910 23:22:37.871932 2672 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:22:37.875377 kubelet[2672]: I0910 23:22:37.875341 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:22:37.876911 kubelet[2672]: I0910 23:22:37.876883 2672 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 10 23:22:37.876969 kubelet[2672]: I0910 23:22:37.876954 2672 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 23:22:37.877002 kubelet[2672]: I0910 23:22:37.876981 2672 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 23:22:37.877087 kubelet[2672]: E0910 23:22:37.877026 2672 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:22:37.915694 kubelet[2672]: I0910 23:22:37.915655 2672 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 23:22:37.915817 kubelet[2672]: I0910 23:22:37.915805 2672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 23:22:37.915890 kubelet[2672]: I0910 23:22:37.915881 2672 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:37.916085 kubelet[2672]: I0910 23:22:37.916069 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:22:37.916162 kubelet[2672]: I0910 23:22:37.916136 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:22:37.916210 kubelet[2672]: I0910 23:22:37.916202 2672 policy_none.go:49] "None policy: Start" Sep 10 23:22:37.916988 kubelet[2672]: I0910 23:22:37.916968 2672 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 23:22:37.916988 kubelet[2672]: I0910 23:22:37.916991 2672 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:22:37.917143 kubelet[2672]: I0910 23:22:37.917127 2672 state_mem.go:75] "Updated machine memory state" Sep 10 23:22:37.921495 kubelet[2672]: I0910 23:22:37.921456 2672 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:22:37.921638 kubelet[2672]: I0910 23:22:37.921621 2672 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:22:37.921690 kubelet[2672]: I0910 23:22:37.921639 2672 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:22:37.921891 kubelet[2672]: I0910 23:22:37.921878 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:22:38.026146 kubelet[2672]: I0910 23:22:38.026100 2672 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 23:22:38.037320 kubelet[2672]: I0910 23:22:38.036751 2672 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 23:22:38.037320 kubelet[2672]: I0910 23:22:38.036865 2672 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 23:22:38.059239 kubelet[2672]: I0910 23:22:38.059203 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:38.059447 kubelet[2672]: I0910 23:22:38.059424 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:38.059525 kubelet[2672]: I0910 23:22:38.059511 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:38.059588 kubelet[2672]: I0910 23:22:38.059575 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:38.059650 kubelet[2672]: I0910 23:22:38.059637 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:38.059713 kubelet[2672]: I0910 23:22:38.059700 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:38.059788 kubelet[2672]: I0910 23:22:38.059775 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4917ad058d72d72420abdfc684a37682-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4917ad058d72d72420abdfc684a37682\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:38.059851 kubelet[2672]: I0910 23:22:38.059838 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:38.059915 kubelet[2672]: I0910 23:22:38.059899 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:38.286361 kubelet[2672]: E0910 23:22:38.286189 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.288853 kubelet[2672]: E0910 23:22:38.288738 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.289702 kubelet[2672]: E0910 23:22:38.289673 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.841388 kubelet[2672]: I0910 23:22:38.841350 2672 apiserver.go:52] "Watching apiserver" Sep 10 23:22:38.858158 kubelet[2672]: I0910 23:22:38.858109 2672 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 23:22:38.895941 kubelet[2672]: E0910 23:22:38.895755 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.895941 kubelet[2672]: E0910 23:22:38.895867 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.900508 kubelet[2672]: E0910 23:22:38.900433 2672 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:38.900617 kubelet[2672]: E0910 23:22:38.900591 2672 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:38.936886 kubelet[2672]: I0910 23:22:38.936799 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.936782498 podStartE2EDuration="1.936782498s" podCreationTimestamp="2025-09-10 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:38.929913498 +0000 UTC m=+1.150867201" watchObservedRunningTime="2025-09-10 23:22:38.936782498 +0000 UTC m=+1.157736201" Sep 10 23:22:38.943860 kubelet[2672]: I0910 23:22:38.943616 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.943600258 podStartE2EDuration="1.943600258s" podCreationTimestamp="2025-09-10 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:38.936950498 +0000 UTC m=+1.157904201" watchObservedRunningTime="2025-09-10 23:22:38.943600258 +0000 UTC m=+1.164553961" Sep 10 23:22:38.943860 kubelet[2672]: I0910 23:22:38.943720 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.943714978 podStartE2EDuration="1.943714978s" podCreationTimestamp="2025-09-10 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:38.943393978 +0000 UTC m=+1.164347681" watchObservedRunningTime="2025-09-10 23:22:38.943714978 +0000 UTC m=+1.164668681" Sep 10 23:22:39.897431 kubelet[2672]: E0910 23:22:39.897400 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:40.300453 kubelet[2672]: E0910 23:22:40.300354 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:41.294673 kubelet[2672]: E0910 23:22:41.294623 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:41.537622 kubelet[2672]: E0910 23:22:41.537576 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:41.635973 kubelet[2672]: I0910 23:22:41.635944 2672 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:22:41.636222 containerd[1524]: time="2025-09-10T23:22:41.636185449Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:22:41.636573 kubelet[2672]: I0910 23:22:41.636376 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:22:42.457930 systemd[1]: Created slice kubepods-besteffort-pod1fb7c126_f8c4_4d5d_a9e7_b05cd4b29d60.slice - libcontainer container kubepods-besteffort-pod1fb7c126_f8c4_4d5d_a9e7_b05cd4b29d60.slice. 
Sep 10 23:22:42.492347 kubelet[2672]: I0910 23:22:42.492295 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gvz\" (UniqueName: \"kubernetes.io/projected/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-kube-api-access-f9gvz\") pod \"kube-proxy-dtxlk\" (UID: \"1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60\") " pod="kube-system/kube-proxy-dtxlk" Sep 10 23:22:42.492347 kubelet[2672]: I0910 23:22:42.492344 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-kube-proxy\") pod \"kube-proxy-dtxlk\" (UID: \"1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60\") " pod="kube-system/kube-proxy-dtxlk" Sep 10 23:22:42.492751 kubelet[2672]: I0910 23:22:42.492368 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-xtables-lock\") pod \"kube-proxy-dtxlk\" (UID: \"1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60\") " pod="kube-system/kube-proxy-dtxlk" Sep 10 23:22:42.492751 kubelet[2672]: I0910 23:22:42.492384 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-lib-modules\") pod \"kube-proxy-dtxlk\" (UID: \"1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60\") " pod="kube-system/kube-proxy-dtxlk" Sep 10 23:22:42.665601 kubelet[2672]: E0910 23:22:42.665551 2672 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 10 23:22:42.665601 kubelet[2672]: E0910 23:22:42.665588 2672 projected.go:194] Error preparing data for projected volume kube-api-access-f9gvz for pod kube-system/kube-proxy-dtxlk: configmap "kube-root-ca.crt" not found Sep 10 23:22:42.665748 kubelet[2672]: E0910 23:22:42.665642 2672 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-kube-api-access-f9gvz podName:1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60 nodeName:}" failed. No retries permitted until 2025-09-10 23:22:43.165620852 +0000 UTC m=+5.386574555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f9gvz" (UniqueName: "kubernetes.io/projected/1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60-kube-api-access-f9gvz") pod "kube-proxy-dtxlk" (UID: "1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60") : configmap "kube-root-ca.crt" not found Sep 10 23:22:42.840308 systemd[1]: Created slice kubepods-besteffort-pod65a0bbfe_3928_41c9_bcaa_b6dd67ce1896.slice - libcontainer container kubepods-besteffort-pod65a0bbfe_3928_41c9_bcaa_b6dd67ce1896.slice. Sep 10 23:22:42.895970 kubelet[2672]: I0910 23:22:42.895923 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65a0bbfe-3928-41c9-bcaa-b6dd67ce1896-var-lib-calico\") pod \"tigera-operator-58fc44c59b-jdxbs\" (UID: \"65a0bbfe-3928-41c9-bcaa-b6dd67ce1896\") " pod="tigera-operator/tigera-operator-58fc44c59b-jdxbs" Sep 10 23:22:42.895970 kubelet[2672]: I0910 23:22:42.895975 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhvl\" (UniqueName: \"kubernetes.io/projected/65a0bbfe-3928-41c9-bcaa-b6dd67ce1896-kube-api-access-vdhvl\") pod \"tigera-operator-58fc44c59b-jdxbs\" (UID: \"65a0bbfe-3928-41c9-bcaa-b6dd67ce1896\") " pod="tigera-operator/tigera-operator-58fc44c59b-jdxbs" Sep 10 23:22:43.143563 containerd[1524]: time="2025-09-10T23:22:43.143454185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jdxbs,Uid:65a0bbfe-3928-41c9-bcaa-b6dd67ce1896,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:22:43.163048 containerd[1524]: time="2025-09-10T23:22:43.163006918Z" level=info 
msg="connecting to shim 63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0" address="unix:///run/containerd/s/82d01b0762f93b52a62712a61919bba839b760191e8e298a601c514f1088977a" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:43.189444 systemd[1]: Started cri-containerd-63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0.scope - libcontainer container 63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0. Sep 10 23:22:43.222175 containerd[1524]: time="2025-09-10T23:22:43.222127277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jdxbs,Uid:65a0bbfe-3928-41c9-bcaa-b6dd67ce1896,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0\"" Sep 10 23:22:43.224125 containerd[1524]: time="2025-09-10T23:22:43.224100430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:22:43.366807 kubelet[2672]: E0910 23:22:43.366757 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:43.367713 containerd[1524]: time="2025-09-10T23:22:43.367402061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtxlk,Uid:1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:43.382436 containerd[1524]: time="2025-09-10T23:22:43.382395729Z" level=info msg="connecting to shim e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893" address="unix:///run/containerd/s/9b6d7403b069a055af800e4762d6880e401b6af8a9f7172cc3ecd4d377dddf49" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:43.406441 systemd[1]: Started cri-containerd-e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893.scope - libcontainer container e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893. 
Sep 10 23:22:43.427343 containerd[1524]: time="2025-09-10T23:22:43.427281216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtxlk,Uid:1fb7c126-f8c4-4d5d-a9e7-b05cd4b29d60,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893\"" Sep 10 23:22:43.428237 kubelet[2672]: E0910 23:22:43.428202 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:43.432320 containerd[1524]: time="2025-09-10T23:22:43.431809721Z" level=info msg="CreateContainer within sandbox \"e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:22:43.440361 containerd[1524]: time="2025-09-10T23:22:43.440316372Z" level=info msg="Container b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:43.447496 containerd[1524]: time="2025-09-10T23:22:43.447442827Z" level=info msg="CreateContainer within sandbox \"e9c672cfae377638905e38710761398f4d48d47d9a2484a63ae0625ffe44b893\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df\"" Sep 10 23:22:43.448111 containerd[1524]: time="2025-09-10T23:22:43.448021905Z" level=info msg="StartContainer for \"b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df\"" Sep 10 23:22:43.449531 containerd[1524]: time="2025-09-10T23:22:43.449496740Z" level=info msg="connecting to shim b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df" address="unix:///run/containerd/s/9b6d7403b069a055af800e4762d6880e401b6af8a9f7172cc3ecd4d377dddf49" protocol=ttrpc version=3 Sep 10 23:22:43.471462 systemd[1]: Started cri-containerd-b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df.scope - libcontainer 
container b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df. Sep 10 23:22:43.506443 containerd[1524]: time="2025-09-10T23:22:43.506083267Z" level=info msg="StartContainer for \"b49b3c556222d053f6bf5444ad102c856992a7879b3a827ed684d2ff3e5b02df\" returns successfully" Sep 10 23:22:43.907194 kubelet[2672]: E0910 23:22:43.907105 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:44.243460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513655487.mount: Deactivated successfully. Sep 10 23:22:44.696047 containerd[1524]: time="2025-09-10T23:22:44.696000754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:44.696783 containerd[1524]: time="2025-09-10T23:22:44.696752792Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:22:44.697727 containerd[1524]: time="2025-09-10T23:22:44.697678149Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:44.700307 containerd[1524]: time="2025-09-10T23:22:44.700244500Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:44.701150 containerd[1524]: time="2025-09-10T23:22:44.701102858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 
1.476969268s" Sep 10 23:22:44.701150 containerd[1524]: time="2025-09-10T23:22:44.701140058Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:22:44.703898 containerd[1524]: time="2025-09-10T23:22:44.703859689Z" level=info msg="CreateContainer within sandbox \"63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:22:44.713500 containerd[1524]: time="2025-09-10T23:22:44.713455578Z" level=info msg="Container 6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:44.715681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950518646.mount: Deactivated successfully. Sep 10 23:22:44.719181 containerd[1524]: time="2025-09-10T23:22:44.719122720Z" level=info msg="CreateContainer within sandbox \"63f64edcffe73c370ea33c76e17d19f735543ec0fa61cd3ed95f529180ca71c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640\"" Sep 10 23:22:44.719918 containerd[1524]: time="2025-09-10T23:22:44.719812958Z" level=info msg="StartContainer for \"6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640\"" Sep 10 23:22:44.720916 containerd[1524]: time="2025-09-10T23:22:44.720878434Z" level=info msg="connecting to shim 6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640" address="unix:///run/containerd/s/82d01b0762f93b52a62712a61919bba839b760191e8e298a601c514f1088977a" protocol=ttrpc version=3 Sep 10 23:22:44.758520 systemd[1]: Started cri-containerd-6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640.scope - libcontainer container 6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640. 
Sep 10 23:22:44.782175 containerd[1524]: time="2025-09-10T23:22:44.782131078Z" level=info msg="StartContainer for \"6a45b32e0aa2dbd965802623e37af8361d72c6bfa205febf68bd32d55dd6d640\" returns successfully" Sep 10 23:22:44.921128 kubelet[2672]: I0910 23:22:44.920674 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dtxlk" podStartSLOduration=2.9206559949999997 podStartE2EDuration="2.920655995s" podCreationTimestamp="2025-09-10 23:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:43.916933665 +0000 UTC m=+6.137887328" watchObservedRunningTime="2025-09-10 23:22:44.920655995 +0000 UTC m=+7.141609698" Sep 10 23:22:44.921128 kubelet[2672]: I0910 23:22:44.920783 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-jdxbs" podStartSLOduration=1.442400492 podStartE2EDuration="2.920779235s" podCreationTimestamp="2025-09-10 23:22:42 +0000 UTC" firstStartedPulling="2025-09-10 23:22:43.223443232 +0000 UTC m=+5.444396895" lastFinishedPulling="2025-09-10 23:22:44.701821975 +0000 UTC m=+6.922775638" observedRunningTime="2025-09-10 23:22:44.920493596 +0000 UTC m=+7.141447299" watchObservedRunningTime="2025-09-10 23:22:44.920779235 +0000 UTC m=+7.141732938" Sep 10 23:22:50.023433 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 10 23:22:50.025334 sshd[1739]: Connection closed by 10.0.0.1 port 45338 Sep 10 23:22:50.027005 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:50.031214 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:22:50.031472 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:45338.service: Deactivated successfully. Sep 10 23:22:50.035120 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 10 23:22:50.035365 systemd[1]: session-7.scope: Consumed 6.756s CPU time, 214.7M memory peak. Sep 10 23:22:50.037158 systemd-logind[1504]: Removed session 7. Sep 10 23:22:50.323956 kubelet[2672]: E0910 23:22:50.323896 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:51.303864 kubelet[2672]: E0910 23:22:51.303828 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:51.547519 kubelet[2672]: E0910 23:22:51.547486 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:53.063372 update_engine[1513]: I20250910 23:22:53.063297 1513 update_attempter.cc:509] Updating boot flags... Sep 10 23:22:54.582354 systemd[1]: Created slice kubepods-besteffort-pod0078d4ca_07c5_4982_a28d_54498bdea713.slice - libcontainer container kubepods-besteffort-pod0078d4ca_07c5_4982_a28d_54498bdea713.slice. 
Sep 10 23:22:54.764064 kubelet[2672]: I0910 23:22:54.763973 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0078d4ca-07c5-4982-a28d-54498bdea713-typha-certs\") pod \"calico-typha-75b97687b8-wgb28\" (UID: \"0078d4ca-07c5-4982-a28d-54498bdea713\") " pod="calico-system/calico-typha-75b97687b8-wgb28" Sep 10 23:22:54.764064 kubelet[2672]: I0910 23:22:54.764025 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0078d4ca-07c5-4982-a28d-54498bdea713-tigera-ca-bundle\") pod \"calico-typha-75b97687b8-wgb28\" (UID: \"0078d4ca-07c5-4982-a28d-54498bdea713\") " pod="calico-system/calico-typha-75b97687b8-wgb28" Sep 10 23:22:54.764584 kubelet[2672]: I0910 23:22:54.764114 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vgw\" (UniqueName: \"kubernetes.io/projected/0078d4ca-07c5-4982-a28d-54498bdea713-kube-api-access-p7vgw\") pod \"calico-typha-75b97687b8-wgb28\" (UID: \"0078d4ca-07c5-4982-a28d-54498bdea713\") " pod="calico-system/calico-typha-75b97687b8-wgb28" Sep 10 23:22:54.887545 kubelet[2672]: E0910 23:22:54.887404 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:54.891071 containerd[1524]: time="2025-09-10T23:22:54.890801856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b97687b8-wgb28,Uid:0078d4ca-07c5-4982-a28d-54498bdea713,Namespace:calico-system,Attempt:0,}" Sep 10 23:22:54.929199 containerd[1524]: time="2025-09-10T23:22:54.929146712Z" level=info msg="connecting to shim db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9" 
address="unix:///run/containerd/s/06c71d51d69fa5d278e9f28b918b22d49140fd4f5d5a1814da932d1ff4c2a64f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:54.993457 systemd[1]: Started cri-containerd-db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9.scope - libcontainer container db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9. Sep 10 23:22:54.998395 systemd[1]: Created slice kubepods-besteffort-podfa908c6c_bcf8_4198_9a9b_8b98ea77fca8.slice - libcontainer container kubepods-besteffort-podfa908c6c_bcf8_4198_9a9b_8b98ea77fca8.slice. Sep 10 23:22:55.035005 containerd[1524]: time="2025-09-10T23:22:55.034875098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b97687b8-wgb28,Uid:0078d4ca-07c5-4982-a28d-54498bdea713,Namespace:calico-system,Attempt:0,} returns sandbox id \"db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9\"" Sep 10 23:22:55.038746 kubelet[2672]: E0910 23:22:55.038713 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:55.041490 containerd[1524]: time="2025-09-10T23:22:55.041183048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 23:22:55.167146 kubelet[2672]: I0910 23:22:55.167040 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-flexvol-driver-host\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167146 kubelet[2672]: I0910 23:22:55.167090 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-policysync\") pod \"calico-node-slwqs\" (UID: 
\"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167146 kubelet[2672]: I0910 23:22:55.167110 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-var-lib-calico\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167322 kubelet[2672]: I0910 23:22:55.167159 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-lib-modules\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167322 kubelet[2672]: I0910 23:22:55.167193 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjxs\" (UniqueName: \"kubernetes.io/projected/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-kube-api-access-lhjxs\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167322 kubelet[2672]: I0910 23:22:55.167218 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-tigera-ca-bundle\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167322 kubelet[2672]: I0910 23:22:55.167250 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-cni-log-dir\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " 
pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167322 kubelet[2672]: I0910 23:22:55.167285 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-node-certs\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167424 kubelet[2672]: I0910 23:22:55.167310 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-cni-net-dir\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167424 kubelet[2672]: I0910 23:22:55.167331 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-cni-bin-dir\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167424 kubelet[2672]: I0910 23:22:55.167347 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-var-run-calico\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.167424 kubelet[2672]: I0910 23:22:55.167362 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa908c6c-bcf8-4198-9a9b-8b98ea77fca8-xtables-lock\") pod \"calico-node-slwqs\" (UID: \"fa908c6c-bcf8-4198-9a9b-8b98ea77fca8\") " pod="calico-system/calico-node-slwqs" Sep 10 23:22:55.266917 kubelet[2672]: E0910 
23:22:55.266833 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59n6n" podUID="56b76e01-e81c-4847-8f88-e9e155779575" Sep 10 23:22:55.268709 kubelet[2672]: I0910 23:22:55.268427 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56b76e01-e81c-4847-8f88-e9e155779575-registration-dir\") pod \"csi-node-driver-59n6n\" (UID: \"56b76e01-e81c-4847-8f88-e9e155779575\") " pod="calico-system/csi-node-driver-59n6n" Sep 10 23:22:55.268709 kubelet[2672]: I0910 23:22:55.268458 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56b76e01-e81c-4847-8f88-e9e155779575-varrun\") pod \"csi-node-driver-59n6n\" (UID: \"56b76e01-e81c-4847-8f88-e9e155779575\") " pod="calico-system/csi-node-driver-59n6n" Sep 10 23:22:55.268709 kubelet[2672]: I0910 23:22:55.268485 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrgps\" (UniqueName: \"kubernetes.io/projected/56b76e01-e81c-4847-8f88-e9e155779575-kube-api-access-qrgps\") pod \"csi-node-driver-59n6n\" (UID: \"56b76e01-e81c-4847-8f88-e9e155779575\") " pod="calico-system/csi-node-driver-59n6n" Sep 10 23:22:55.268709 kubelet[2672]: I0910 23:22:55.268547 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56b76e01-e81c-4847-8f88-e9e155779575-socket-dir\") pod \"csi-node-driver-59n6n\" (UID: \"56b76e01-e81c-4847-8f88-e9e155779575\") " pod="calico-system/csi-node-driver-59n6n" Sep 10 23:22:55.268709 kubelet[2672]: I0910 23:22:55.268577 2672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56b76e01-e81c-4847-8f88-e9e155779575-kubelet-dir\") pod \"csi-node-driver-59n6n\" (UID: \"56b76e01-e81c-4847-8f88-e9e155779575\") " pod="calico-system/csi-node-driver-59n6n" Sep 10 23:22:55.272373 kubelet[2672]: E0910 23:22:55.272346 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.272373 kubelet[2672]: W0910 23:22:55.272368 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.272539 kubelet[2672]: E0910 23:22:55.272386 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.273070 kubelet[2672]: E0910 23:22:55.273050 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.273070 kubelet[2672]: W0910 23:22:55.273066 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.273149 kubelet[2672]: E0910 23:22:55.273079 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.273464 kubelet[2672]: E0910 23:22:55.273445 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.273464 kubelet[2672]: W0910 23:22:55.273460 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.273541 kubelet[2672]: E0910 23:22:55.273472 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.275830 kubelet[2672]: E0910 23:22:55.275809 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.275830 kubelet[2672]: W0910 23:22:55.275826 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.275916 kubelet[2672]: E0910 23:22:55.275841 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.278412 kubelet[2672]: E0910 23:22:55.278384 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.278412 kubelet[2672]: W0910 23:22:55.278404 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.278505 kubelet[2672]: E0910 23:22:55.278420 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.285748 kubelet[2672]: E0910 23:22:55.285729 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.285901 kubelet[2672]: W0910 23:22:55.285845 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.285901 kubelet[2672]: E0910 23:22:55.285867 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.303922 containerd[1524]: time="2025-09-10T23:22:55.303831595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slwqs,Uid:fa908c6c-bcf8-4198-9a9b-8b98ea77fca8,Namespace:calico-system,Attempt:0,}" Sep 10 23:22:55.326596 containerd[1524]: time="2025-09-10T23:22:55.325340801Z" level=info msg="connecting to shim 35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030" address="unix:///run/containerd/s/f8a99070bc5e9e050c68879def663a22b5934f049e7e11a62cdd955e366578ad" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:55.348465 systemd[1]: Started cri-containerd-35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030.scope - libcontainer container 35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030. Sep 10 23:22:55.369623 kubelet[2672]: E0910 23:22:55.369598 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.369623 kubelet[2672]: W0910 23:22:55.369619 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.369758 kubelet[2672]: E0910 23:22:55.369637 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.369956 kubelet[2672]: E0910 23:22:55.369941 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.369956 kubelet[2672]: W0910 23:22:55.369952 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.370027 kubelet[2672]: E0910 23:22:55.369965 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.370178 kubelet[2672]: E0910 23:22:55.370160 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.370178 kubelet[2672]: W0910 23:22:55.370171 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.370178 kubelet[2672]: E0910 23:22:55.370180 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.370424 kubelet[2672]: E0910 23:22:55.370404 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.370424 kubelet[2672]: W0910 23:22:55.370416 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.370424 kubelet[2672]: E0910 23:22:55.370429 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.370716 kubelet[2672]: E0910 23:22:55.370615 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.370716 kubelet[2672]: W0910 23:22:55.370625 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.370716 kubelet[2672]: E0910 23:22:55.370636 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.370804 containerd[1524]: time="2025-09-10T23:22:55.370620449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-slwqs,Uid:fa908c6c-bcf8-4198-9a9b-8b98ea77fca8,Namespace:calico-system,Attempt:0,} returns sandbox id \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\"" Sep 10 23:22:55.370921 kubelet[2672]: E0910 23:22:55.370845 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.370921 kubelet[2672]: W0910 23:22:55.370870 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.371000 kubelet[2672]: E0910 23:22:55.370939 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.371344 kubelet[2672]: E0910 23:22:55.371326 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.371344 kubelet[2672]: W0910 23:22:55.371342 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.371662 kubelet[2672]: E0910 23:22:55.371386 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.371751 kubelet[2672]: E0910 23:22:55.371709 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.371751 kubelet[2672]: W0910 23:22:55.371724 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.371805 kubelet[2672]: E0910 23:22:55.371776 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.372066 kubelet[2672]: E0910 23:22:55.372047 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.372066 kubelet[2672]: W0910 23:22:55.372062 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.372136 kubelet[2672]: E0910 23:22:55.372124 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.372372 kubelet[2672]: E0910 23:22:55.372349 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.372425 kubelet[2672]: W0910 23:22:55.372364 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.372566 kubelet[2672]: E0910 23:22:55.372444 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.372731 kubelet[2672]: E0910 23:22:55.372714 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.372731 kubelet[2672]: W0910 23:22:55.372731 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.372844 kubelet[2672]: E0910 23:22:55.372822 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.372981 kubelet[2672]: E0910 23:22:55.372968 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.372981 kubelet[2672]: W0910 23:22:55.372981 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.373036 kubelet[2672]: E0910 23:22:55.373019 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.373211 kubelet[2672]: E0910 23:22:55.373199 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.373249 kubelet[2672]: W0910 23:22:55.373211 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.373395 kubelet[2672]: E0910 23:22:55.373365 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.373438 kubelet[2672]: E0910 23:22:55.373419 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.373438 kubelet[2672]: W0910 23:22:55.373429 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.373528 kubelet[2672]: E0910 23:22:55.373502 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.373604 kubelet[2672]: E0910 23:22:55.373592 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.373604 kubelet[2672]: W0910 23:22:55.373603 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.373659 kubelet[2672]: E0910 23:22:55.373614 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.373755 kubelet[2672]: E0910 23:22:55.373733 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.373755 kubelet[2672]: W0910 23:22:55.373746 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.373796 kubelet[2672]: E0910 23:22:55.373756 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.373950 kubelet[2672]: E0910 23:22:55.373938 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.373950 kubelet[2672]: W0910 23:22:55.373950 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374024 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374137 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.374720 kubelet[2672]: W0910 23:22:55.374151 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374198 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374439 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.374720 kubelet[2672]: W0910 23:22:55.374451 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374518 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374655 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.374720 kubelet[2672]: W0910 23:22:55.374666 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.374720 kubelet[2672]: E0910 23:22:55.374687 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.374952 kubelet[2672]: E0910 23:22:55.374864 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.374952 kubelet[2672]: W0910 23:22:55.374874 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.374952 kubelet[2672]: E0910 23:22:55.374887 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.375053 kubelet[2672]: E0910 23:22:55.375026 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.375053 kubelet[2672]: W0910 23:22:55.375039 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.375053 kubelet[2672]: E0910 23:22:55.375053 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.375355 kubelet[2672]: E0910 23:22:55.375316 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.375355 kubelet[2672]: W0910 23:22:55.375332 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.375457 kubelet[2672]: E0910 23:22:55.375363 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.375569 kubelet[2672]: E0910 23:22:55.375556 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.375731 kubelet[2672]: W0910 23:22:55.375568 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.375731 kubelet[2672]: E0910 23:22:55.375588 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:55.376222 kubelet[2672]: E0910 23:22:55.376100 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.376222 kubelet[2672]: W0910 23:22:55.376117 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.376222 kubelet[2672]: E0910 23:22:55.376130 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:55.385777 kubelet[2672]: E0910 23:22:55.385756 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:55.385777 kubelet[2672]: W0910 23:22:55.385773 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:55.385860 kubelet[2672]: E0910 23:22:55.385791 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:56.064121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424507003.mount: Deactivated successfully. Sep 10 23:22:56.877880 kubelet[2672]: E0910 23:22:56.877836 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59n6n" podUID="56b76e01-e81c-4847-8f88-e9e155779575" Sep 10 23:22:57.340948 containerd[1524]: time="2025-09-10T23:22:57.340872433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:57.341706 containerd[1524]: time="2025-09-10T23:22:57.341681231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 23:22:57.342354 containerd[1524]: time="2025-09-10T23:22:57.342326471Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:57.344593 containerd[1524]: time="2025-09-10T23:22:57.344394388Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:57.345083 containerd[1524]: time="2025-09-10T23:22:57.345030427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.303806219s" Sep 10 23:22:57.345083 containerd[1524]: time="2025-09-10T23:22:57.345078067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 23:22:57.345972 containerd[1524]: time="2025-09-10T23:22:57.345939066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 23:22:57.360742 containerd[1524]: time="2025-09-10T23:22:57.360704205Z" level=info msg="CreateContainer within sandbox \"db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 23:22:57.367282 containerd[1524]: time="2025-09-10T23:22:57.367102956Z" level=info msg="Container f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:57.376986 containerd[1524]: time="2025-09-10T23:22:57.376927943Z" level=info msg="CreateContainer within sandbox \"db67bdc1214b5101d38f49be7cf3ae51763ab9d0b717931dccd2b1bdc06ae4c9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff\"" Sep 10 23:22:57.378563 containerd[1524]: time="2025-09-10T23:22:57.378534701Z" level=info msg="StartContainer for 
\"f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff\"" Sep 10 23:22:57.380050 containerd[1524]: time="2025-09-10T23:22:57.379744499Z" level=info msg="connecting to shim f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff" address="unix:///run/containerd/s/06c71d51d69fa5d278e9f28b918b22d49140fd4f5d5a1814da932d1ff4c2a64f" protocol=ttrpc version=3 Sep 10 23:22:57.411700 systemd[1]: Started cri-containerd-f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff.scope - libcontainer container f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff. Sep 10 23:22:57.451549 containerd[1524]: time="2025-09-10T23:22:57.451513800Z" level=info msg="StartContainer for \"f7a7f12b31f5e73b778b615e0a09d8fade02fedbc6375ca7f0b036d878dafbff\" returns successfully" Sep 10 23:22:57.951014 kubelet[2672]: E0910 23:22:57.950983 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:57.967243 kubelet[2672]: I0910 23:22:57.967158 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75b97687b8-wgb28" podStartSLOduration=1.661212871 podStartE2EDuration="3.967133607s" podCreationTimestamp="2025-09-10 23:22:54 +0000 UTC" firstStartedPulling="2025-09-10 23:22:55.03984669 +0000 UTC m=+17.260800393" lastFinishedPulling="2025-09-10 23:22:57.345767426 +0000 UTC m=+19.566721129" observedRunningTime="2025-09-10 23:22:57.966392128 +0000 UTC m=+20.187345831" watchObservedRunningTime="2025-09-10 23:22:57.967133607 +0000 UTC m=+20.188087350" Sep 10 23:22:57.992733 kubelet[2672]: E0910 23:22:57.992687 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.992733 kubelet[2672]: W0910 23:22:57.992708 2672 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.992733 kubelet[2672]: E0910 23:22:57.992735 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.993074 kubelet[2672]: E0910 23:22:57.992888 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993074 kubelet[2672]: W0910 23:22:57.992896 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993074 kubelet[2672]: E0910 23:22:57.992904 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.993074 kubelet[2672]: E0910 23:22:57.993071 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993158 kubelet[2672]: W0910 23:22:57.993079 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993158 kubelet[2672]: E0910 23:22:57.993086 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.993224 kubelet[2672]: E0910 23:22:57.993198 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993224 kubelet[2672]: W0910 23:22:57.993208 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993308 kubelet[2672]: E0910 23:22:57.993225 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.993430 kubelet[2672]: E0910 23:22:57.993415 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993464 kubelet[2672]: W0910 23:22:57.993437 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993464 kubelet[2672]: E0910 23:22:57.993447 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.993589 kubelet[2672]: E0910 23:22:57.993577 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993621 kubelet[2672]: W0910 23:22:57.993590 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993621 kubelet[2672]: E0910 23:22:57.993604 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.993822 kubelet[2672]: E0910 23:22:57.993809 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.993822 kubelet[2672]: W0910 23:22:57.993821 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.993893 kubelet[2672]: E0910 23:22:57.993830 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.994005 kubelet[2672]: E0910 23:22:57.993992 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.994033 kubelet[2672]: W0910 23:22:57.994004 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.994033 kubelet[2672]: E0910 23:22:57.994013 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.994562 kubelet[2672]: E0910 23:22:57.994548 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.994562 kubelet[2672]: W0910 23:22:57.994562 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.994634 kubelet[2672]: E0910 23:22:57.994574 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.994728 kubelet[2672]: E0910 23:22:57.994714 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.994728 kubelet[2672]: W0910 23:22:57.994726 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.994790 kubelet[2672]: E0910 23:22:57.994736 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.994857 kubelet[2672]: E0910 23:22:57.994845 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.994857 kubelet[2672]: W0910 23:22:57.994856 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.994913 kubelet[2672]: E0910 23:22:57.994863 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.994978 kubelet[2672]: E0910 23:22:57.994967 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.994978 kubelet[2672]: W0910 23:22:57.994977 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.995041 kubelet[2672]: E0910 23:22:57.994984 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.995107 kubelet[2672]: E0910 23:22:57.995096 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.995107 kubelet[2672]: W0910 23:22:57.995106 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.995151 kubelet[2672]: E0910 23:22:57.995113 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:57.995243 kubelet[2672]: E0910 23:22:57.995230 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.995243 kubelet[2672]: W0910 23:22:57.995241 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.995329 kubelet[2672]: E0910 23:22:57.995248 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:57.999538 kubelet[2672]: E0910 23:22:57.999337 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:57.999538 kubelet[2672]: W0910 23:22:57.999355 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:57.999538 kubelet[2672]: E0910 23:22:57.999367 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.091989 kubelet[2672]: E0910 23:22:58.091957 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.091989 kubelet[2672]: W0910 23:22:58.091980 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.092145 kubelet[2672]: E0910 23:22:58.092014 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.092228 kubelet[2672]: E0910 23:22:58.092204 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.092267 kubelet[2672]: W0910 23:22:58.092230 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.092267 kubelet[2672]: E0910 23:22:58.092241 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.092424 kubelet[2672]: E0910 23:22:58.092411 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.092424 kubelet[2672]: W0910 23:22:58.092423 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.092525 kubelet[2672]: E0910 23:22:58.092447 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.092695 kubelet[2672]: E0910 23:22:58.092681 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.092695 kubelet[2672]: W0910 23:22:58.092692 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.092768 kubelet[2672]: E0910 23:22:58.092705 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.093379 kubelet[2672]: E0910 23:22:58.093345 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.093551 kubelet[2672]: W0910 23:22:58.093533 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.093580 kubelet[2672]: E0910 23:22:58.093560 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.094629 kubelet[2672]: E0910 23:22:58.094590 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.094629 kubelet[2672]: W0910 23:22:58.094607 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.094706 kubelet[2672]: E0910 23:22:58.094661 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.095123 kubelet[2672]: E0910 23:22:58.094910 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.095123 kubelet[2672]: W0910 23:22:58.094924 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.095123 kubelet[2672]: E0910 23:22:58.094963 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.095123 kubelet[2672]: E0910 23:22:58.095090 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.095123 kubelet[2672]: W0910 23:22:58.095098 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.095300 kubelet[2672]: E0910 23:22:58.095174 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095334 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096120 kubelet[2672]: W0910 23:22:58.095347 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095357 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095689 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096120 kubelet[2672]: W0910 23:22:58.095698 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095708 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095890 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096120 kubelet[2672]: W0910 23:22:58.095912 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.095923 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.096120 kubelet[2672]: E0910 23:22:58.096084 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096438 kubelet[2672]: W0910 23:22:58.096092 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096438 kubelet[2672]: E0910 23:22:58.096100 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.096438 kubelet[2672]: E0910 23:22:58.096310 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096438 kubelet[2672]: W0910 23:22:58.096320 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096438 kubelet[2672]: E0910 23:22:58.096335 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.096805 kubelet[2672]: E0910 23:22:58.096559 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096805 kubelet[2672]: W0910 23:22:58.096577 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096805 kubelet[2672]: E0910 23:22:58.096591 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.096805 kubelet[2672]: E0910 23:22:58.096728 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.096805 kubelet[2672]: W0910 23:22:58.096735 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.096805 kubelet[2672]: E0910 23:22:58.096748 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.097080 kubelet[2672]: E0910 23:22:58.096925 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.097080 kubelet[2672]: W0910 23:22:58.096933 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.097080 kubelet[2672]: E0910 23:22:58.096945 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.097453 kubelet[2672]: E0910 23:22:58.097436 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.098463 kubelet[2672]: W0910 23:22:58.098296 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.098463 kubelet[2672]: E0910 23:22:58.098331 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:22:58.098592 kubelet[2672]: E0910 23:22:58.098579 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:22:58.099667 kubelet[2672]: W0910 23:22:58.099637 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:22:58.100370 kubelet[2672]: E0910 23:22:58.100344 2672 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:22:58.441994 containerd[1524]: time="2025-09-10T23:22:58.441624748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:58.442431 containerd[1524]: time="2025-09-10T23:22:58.442193468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:22:58.443182 containerd[1524]: time="2025-09-10T23:22:58.443141067Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:58.445101 containerd[1524]: time="2025-09-10T23:22:58.445046464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:58.445826 containerd[1524]: time="2025-09-10T23:22:58.445786383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.099805717s" Sep 10 23:22:58.445896 containerd[1524]: time="2025-09-10T23:22:58.445827863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:22:58.451309 containerd[1524]: time="2025-09-10T23:22:58.450938896Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:22:58.462194 containerd[1524]: time="2025-09-10T23:22:58.462149562Z" level=info msg="Container e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:58.465894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285231747.mount: Deactivated successfully. Sep 10 23:22:58.478847 containerd[1524]: time="2025-09-10T23:22:58.478796100Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\"" Sep 10 23:22:58.479680 containerd[1524]: time="2025-09-10T23:22:58.479639099Z" level=info msg="StartContainer for \"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\"" Sep 10 23:22:58.481826 containerd[1524]: time="2025-09-10T23:22:58.481781216Z" level=info msg="connecting to shim e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9" address="unix:///run/containerd/s/f8a99070bc5e9e050c68879def663a22b5934f049e7e11a62cdd955e366578ad" protocol=ttrpc version=3 Sep 10 23:22:58.506420 systemd[1]: Started cri-containerd-e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9.scope - libcontainer container e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9. Sep 10 23:22:58.540314 containerd[1524]: time="2025-09-10T23:22:58.540274141Z" level=info msg="StartContainer for \"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\" returns successfully" Sep 10 23:22:58.554531 systemd[1]: cri-containerd-e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9.scope: Deactivated successfully. Sep 10 23:22:58.555345 systemd[1]: cri-containerd-e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9.scope: Consumed 29ms CPU time, 6.1M memory peak, 4.5M written to disk. 
Sep 10 23:22:58.579809 containerd[1524]: time="2025-09-10T23:22:58.579752529Z" level=info msg="received exit event container_id:\"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\" id:\"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\" pid:3340 exited_at:{seconds:1757546578 nanos:574314496}" Sep 10 23:22:58.579985 containerd[1524]: time="2025-09-10T23:22:58.579852849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\" id:\"e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9\" pid:3340 exited_at:{seconds:1757546578 nanos:574314496}" Sep 10 23:22:58.616574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2e7af12b03ed4557a494c25f68c489b275fa6cb6ef77d85dfd579a1aeaa23c9-rootfs.mount: Deactivated successfully. Sep 10 23:22:58.877638 kubelet[2672]: E0910 23:22:58.877578 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59n6n" podUID="56b76e01-e81c-4847-8f88-e9e155779575" Sep 10 23:22:58.954972 kubelet[2672]: I0910 23:22:58.954248 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:22:58.954972 kubelet[2672]: E0910 23:22:58.954614 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:22:58.956113 containerd[1524]: time="2025-09-10T23:22:58.955355082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 23:23:00.877699 kubelet[2672]: E0910 23:23:00.877336 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59n6n" podUID="56b76e01-e81c-4847-8f88-e9e155779575" Sep 10 23:23:01.638914 containerd[1524]: time="2025-09-10T23:23:01.638866347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:01.639538 containerd[1524]: time="2025-09-10T23:23:01.639505546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 23:23:01.642966 containerd[1524]: time="2025-09-10T23:23:01.642889822Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:01.644832 containerd[1524]: time="2025-09-10T23:23:01.644772900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:01.646040 containerd[1524]: time="2025-09-10T23:23:01.645924019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.690534537s" Sep 10 23:23:01.646040 containerd[1524]: time="2025-09-10T23:23:01.645960979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 23:23:01.650487 containerd[1524]: time="2025-09-10T23:23:01.650449734Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 23:23:01.668950 containerd[1524]: time="2025-09-10T23:23:01.668898235Z" level=info msg="Container cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:01.676332 containerd[1524]: time="2025-09-10T23:23:01.676290867Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\"" Sep 10 23:23:01.676969 containerd[1524]: time="2025-09-10T23:23:01.676930546Z" level=info msg="StartContainer for \"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\"" Sep 10 23:23:01.678510 containerd[1524]: time="2025-09-10T23:23:01.678481744Z" level=info msg="connecting to shim cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23" address="unix:///run/containerd/s/f8a99070bc5e9e050c68879def663a22b5934f049e7e11a62cdd955e366578ad" protocol=ttrpc version=3 Sep 10 23:23:01.714480 systemd[1]: Started cri-containerd-cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23.scope - libcontainer container cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23. Sep 10 23:23:01.751644 containerd[1524]: time="2025-09-10T23:23:01.751594186Z" level=info msg="StartContainer for \"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\" returns successfully" Sep 10 23:23:02.297230 systemd[1]: cri-containerd-cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23.scope: Deactivated successfully. 
Sep 10 23:23:02.298230 containerd[1524]: time="2025-09-10T23:23:02.298189142Z" level=info msg="received exit event container_id:\"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\" id:\"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\" pid:3398 exited_at:{seconds:1757546582 nanos:297978623}" Sep 10 23:23:02.298226 systemd[1]: cri-containerd-cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23.scope: Consumed 485ms CPU time, 175.1M memory peak, 2.7M read from disk, 165.8M written to disk. Sep 10 23:23:02.298470 containerd[1524]: time="2025-09-10T23:23:02.298445582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\" id:\"cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23\" pid:3398 exited_at:{seconds:1757546582 nanos:297978623}" Sep 10 23:23:02.316498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb18ef95f5bf9adba9572ea2449a028878211323cbffae4ece1d8174c02a6d23-rootfs.mount: Deactivated successfully. Sep 10 23:23:02.317688 kubelet[2672]: I0910 23:23:02.317663 2672 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 23:23:02.398498 systemd[1]: Created slice kubepods-burstable-podeb59fb67_2714_487d_8245_dc796ba02d18.slice - libcontainer container kubepods-burstable-podeb59fb67_2714_487d_8245_dc796ba02d18.slice. Sep 10 23:23:02.406374 systemd[1]: Created slice kubepods-burstable-podb70e0103_15b0_42d4_8bb9_9869a6a405c8.slice - libcontainer container kubepods-burstable-podb70e0103_15b0_42d4_8bb9_9869a6a405c8.slice. Sep 10 23:23:02.422109 systemd[1]: Created slice kubepods-besteffort-pod59db97de_68b3_4a73_8c92_92bec877428c.slice - libcontainer container kubepods-besteffort-pod59db97de_68b3_4a73_8c92_92bec877428c.slice. 
Sep 10 23:23:02.427444 systemd[1]: Created slice kubepods-besteffort-pode1038467_e177_4f29_8af7_0857c8035031.slice - libcontainer container kubepods-besteffort-pode1038467_e177_4f29_8af7_0857c8035031.slice. Sep 10 23:23:02.431740 systemd[1]: Created slice kubepods-besteffort-pod33898bb5_8476_49e9_ae84_9e80452648c1.slice - libcontainer container kubepods-besteffort-pod33898bb5_8476_49e9_ae84_9e80452648c1.slice. Sep 10 23:23:02.435471 systemd[1]: Created slice kubepods-besteffort-pod81cb8fd7_cecd_405f_9a31_d6d2993fb447.slice - libcontainer container kubepods-besteffort-pod81cb8fd7_cecd_405f_9a31_d6d2993fb447.slice. Sep 10 23:23:02.439997 systemd[1]: Created slice kubepods-besteffort-podc1a2b944_2e85_4183_b76b_a8410e249012.slice - libcontainer container kubepods-besteffort-podc1a2b944_2e85_4183_b76b_a8410e249012.slice. Sep 10 23:23:02.442754 systemd[1]: Created slice kubepods-besteffort-pod0bb26eb8_c7e6_4e42_9e76_36364255597b.slice - libcontainer container kubepods-besteffort-pod0bb26eb8_c7e6_4e42_9e76_36364255597b.slice. 
Sep 10 23:23:02.524525 kubelet[2672]: I0910 23:23:02.524477 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gt8\" (UniqueName: \"kubernetes.io/projected/0bb26eb8-c7e6-4e42-9e76-36364255597b-kube-api-access-t5gt8\") pod \"calico-apiserver-9c84896fc-q6x8n\" (UID: \"0bb26eb8-c7e6-4e42-9e76-36364255597b\") " pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" Sep 10 23:23:02.524862 kubelet[2672]: I0910 23:23:02.524761 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81cb8fd7-cecd-405f-9a31-d6d2993fb447-goldmane-ca-bundle\") pod \"goldmane-7988f88666-6vtm5\" (UID: \"81cb8fd7-cecd-405f-9a31-d6d2993fb447\") " pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.524862 kubelet[2672]: I0910 23:23:02.524807 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/81cb8fd7-cecd-405f-9a31-d6d2993fb447-goldmane-key-pair\") pod \"goldmane-7988f88666-6vtm5\" (UID: \"81cb8fd7-cecd-405f-9a31-d6d2993fb447\") " pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.524862 kubelet[2672]: I0910 23:23:02.524830 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/59db97de-68b3-4a73-8c92-92bec877428c-calico-apiserver-certs\") pod \"calico-apiserver-9c84896fc-95c8s\" (UID: \"59db97de-68b3-4a73-8c92-92bec877428c\") " pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" Sep 10 23:23:02.525173 kubelet[2672]: I0910 23:23:02.524848 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bb26eb8-c7e6-4e42-9e76-36364255597b-calico-apiserver-certs\") pod 
\"calico-apiserver-9c84896fc-q6x8n\" (UID: \"0bb26eb8-c7e6-4e42-9e76-36364255597b\") " pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" Sep 10 23:23:02.525173 kubelet[2672]: I0910 23:23:02.525017 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cwg\" (UniqueName: \"kubernetes.io/projected/81cb8fd7-cecd-405f-9a31-d6d2993fb447-kube-api-access-t7cwg\") pod \"goldmane-7988f88666-6vtm5\" (UID: \"81cb8fd7-cecd-405f-9a31-d6d2993fb447\") " pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.525173 kubelet[2672]: I0910 23:23:02.525078 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-backend-key-pair\") pod \"whisker-6bf67f8579-sm5hx\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " pod="calico-system/whisker-6bf67f8579-sm5hx" Sep 10 23:23:02.525173 kubelet[2672]: I0910 23:23:02.525101 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-ca-bundle\") pod \"whisker-6bf67f8579-sm5hx\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " pod="calico-system/whisker-6bf67f8579-sm5hx" Sep 10 23:23:02.525173 kubelet[2672]: I0910 23:23:02.525119 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgsn\" (UniqueName: \"kubernetes.io/projected/c1a2b944-2e85-4183-b76b-a8410e249012-kube-api-access-ktgsn\") pod \"whisker-6bf67f8579-sm5hx\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " pod="calico-system/whisker-6bf67f8579-sm5hx" Sep 10 23:23:02.525339 kubelet[2672]: I0910 23:23:02.525138 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjz75\" (UniqueName: 
\"kubernetes.io/projected/59db97de-68b3-4a73-8c92-92bec877428c-kube-api-access-cjz75\") pod \"calico-apiserver-9c84896fc-95c8s\" (UID: \"59db97de-68b3-4a73-8c92-92bec877428c\") " pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" Sep 10 23:23:02.525339 kubelet[2672]: I0910 23:23:02.525156 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1038467-e177-4f29-8af7-0857c8035031-tigera-ca-bundle\") pod \"calico-kube-controllers-64c6b9c664-sf4mz\" (UID: \"e1038467-e177-4f29-8af7-0857c8035031\") " pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" Sep 10 23:23:02.525432 kubelet[2672]: I0910 23:23:02.525414 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b70e0103-15b0-42d4-8bb9-9869a6a405c8-config-volume\") pod \"coredns-7c65d6cfc9-zxtpf\" (UID: \"b70e0103-15b0-42d4-8bb9-9869a6a405c8\") " pod="kube-system/coredns-7c65d6cfc9-zxtpf" Sep 10 23:23:02.525535 kubelet[2672]: I0910 23:23:02.525513 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6bfd\" (UniqueName: \"kubernetes.io/projected/e1038467-e177-4f29-8af7-0857c8035031-kube-api-access-g6bfd\") pod \"calico-kube-controllers-64c6b9c664-sf4mz\" (UID: \"e1038467-e177-4f29-8af7-0857c8035031\") " pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" Sep 10 23:23:02.525615 kubelet[2672]: I0910 23:23:02.525604 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vsdw\" (UniqueName: \"kubernetes.io/projected/b70e0103-15b0-42d4-8bb9-9869a6a405c8-kube-api-access-5vsdw\") pod \"coredns-7c65d6cfc9-zxtpf\" (UID: \"b70e0103-15b0-42d4-8bb9-9869a6a405c8\") " pod="kube-system/coredns-7c65d6cfc9-zxtpf" Sep 10 23:23:02.525716 kubelet[2672]: I0910 23:23:02.525705 2672 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33898bb5-8476-49e9-ae84-9e80452648c1-calico-apiserver-certs\") pod \"calico-apiserver-c6df754fc-c7k7k\" (UID: \"33898bb5-8476-49e9-ae84-9e80452648c1\") " pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" Sep 10 23:23:02.525819 kubelet[2672]: I0910 23:23:02.525809 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbc4s\" (UniqueName: \"kubernetes.io/projected/eb59fb67-2714-487d-8245-dc796ba02d18-kube-api-access-wbc4s\") pod \"coredns-7c65d6cfc9-w4tx7\" (UID: \"eb59fb67-2714-487d-8245-dc796ba02d18\") " pod="kube-system/coredns-7c65d6cfc9-w4tx7" Sep 10 23:23:02.525957 kubelet[2672]: I0910 23:23:02.525902 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb59fb67-2714-487d-8245-dc796ba02d18-config-volume\") pod \"coredns-7c65d6cfc9-w4tx7\" (UID: \"eb59fb67-2714-487d-8245-dc796ba02d18\") " pod="kube-system/coredns-7c65d6cfc9-w4tx7" Sep 10 23:23:02.525957 kubelet[2672]: I0910 23:23:02.525922 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81cb8fd7-cecd-405f-9a31-d6d2993fb447-config\") pod \"goldmane-7988f88666-6vtm5\" (UID: \"81cb8fd7-cecd-405f-9a31-d6d2993fb447\") " pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.526092 kubelet[2672]: I0910 23:23:02.525945 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m692j\" (UniqueName: \"kubernetes.io/projected/33898bb5-8476-49e9-ae84-9e80452648c1-kube-api-access-m692j\") pod \"calico-apiserver-c6df754fc-c7k7k\" (UID: \"33898bb5-8476-49e9-ae84-9e80452648c1\") " pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" 
Sep 10 23:23:02.704833 kubelet[2672]: E0910 23:23:02.704691 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:02.705662 containerd[1524]: time="2025-09-10T23:23:02.705633134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w4tx7,Uid:eb59fb67-2714-487d-8245-dc796ba02d18,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:02.718362 kubelet[2672]: E0910 23:23:02.717000 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:02.718493 containerd[1524]: time="2025-09-10T23:23:02.717711042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxtpf,Uid:b70e0103-15b0-42d4-8bb9-9869a6a405c8,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:02.726764 containerd[1524]: time="2025-09-10T23:23:02.726472673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-95c8s,Uid:59db97de-68b3-4a73-8c92-92bec877428c,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:02.732725 containerd[1524]: time="2025-09-10T23:23:02.732671707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c6b9c664-sf4mz,Uid:e1038467-e177-4f29-8af7-0857c8035031,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:02.736756 containerd[1524]: time="2025-09-10T23:23:02.736694743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6df754fc-c7k7k,Uid:33898bb5-8476-49e9-ae84-9e80452648c1,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:02.744551 containerd[1524]: time="2025-09-10T23:23:02.744512895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6vtm5,Uid:81cb8fd7-cecd-405f-9a31-d6d2993fb447,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:02.747423 containerd[1524]: 
time="2025-09-10T23:23:02.744831295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf67f8579-sm5hx,Uid:c1a2b944-2e85-4183-b76b-a8410e249012,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:02.747657 containerd[1524]: time="2025-09-10T23:23:02.744953135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-q6x8n,Uid:0bb26eb8-c7e6-4e42-9e76-36364255597b,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:02.839100 containerd[1524]: time="2025-09-10T23:23:02.839043521Z" level=error msg="Failed to destroy network for sandbox \"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.840905 containerd[1524]: time="2025-09-10T23:23:02.840790959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-95c8s,Uid:59db97de-68b3-4a73-8c92-92bec877428c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.843497 kubelet[2672]: E0910 23:23:02.843436 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.846329 containerd[1524]: time="2025-09-10T23:23:02.846242033Z" level=error msg="Failed to destroy network for 
sandbox \"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.847332 kubelet[2672]: E0910 23:23:02.847255 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" Sep 10 23:23:02.850066 containerd[1524]: time="2025-09-10T23:23:02.849951790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxtpf,Uid:b70e0103-15b0-42d4-8bb9-9869a6a405c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.850725 kubelet[2672]: E0910 23:23:02.850678 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.850811 kubelet[2672]: E0910 23:23:02.850742 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zxtpf" Sep 10 23:23:02.850811 kubelet[2672]: E0910 23:23:02.850783 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zxtpf" Sep 10 23:23:02.850974 kubelet[2672]: E0910 23:23:02.850838 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zxtpf_kube-system(b70e0103-15b0-42d4-8bb9-9869a6a405c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zxtpf_kube-system(b70e0103-15b0-42d4-8bb9-9869a6a405c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96e9431b88ad1c386d9c01f640f5067faf50c5ad29efb561befd3ae14b6347b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zxtpf" podUID="b70e0103-15b0-42d4-8bb9-9869a6a405c8" Sep 10 23:23:02.855508 kubelet[2672]: E0910 23:23:02.855448 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" Sep 10 23:23:02.855624 kubelet[2672]: E0910 23:23:02.855551 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c84896fc-95c8s_calico-apiserver(59db97de-68b3-4a73-8c92-92bec877428c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c84896fc-95c8s_calico-apiserver(59db97de-68b3-4a73-8c92-92bec877428c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ec92b90c7e994b1ad8954afca177a88dd2123f790de41b6d21c98bb3520c8d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" podUID="59db97de-68b3-4a73-8c92-92bec877428c" Sep 10 23:23:02.873438 containerd[1524]: time="2025-09-10T23:23:02.873384286Z" level=error msg="Failed to destroy network for sandbox \"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.873550 containerd[1524]: time="2025-09-10T23:23:02.873399686Z" level=error msg="Failed to destroy network for sandbox \"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.873742 containerd[1524]: time="2025-09-10T23:23:02.873714646Z" level=error msg="Failed to destroy network for sandbox \"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.874037 containerd[1524]: time="2025-09-10T23:23:02.873972366Z" level=error msg="Failed to destroy network for sandbox \"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.874421 containerd[1524]: time="2025-09-10T23:23:02.874385725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-q6x8n,Uid:0bb26eb8-c7e6-4e42-9e76-36364255597b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.874684 kubelet[2672]: E0910 23:23:02.874618 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.874783 kubelet[2672]: E0910 23:23:02.874712 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" Sep 10 23:23:02.874783 kubelet[2672]: E0910 23:23:02.874734 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" Sep 10 23:23:02.874865 kubelet[2672]: E0910 23:23:02.874776 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9c84896fc-q6x8n_calico-apiserver(0bb26eb8-c7e6-4e42-9e76-36364255597b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9c84896fc-q6x8n_calico-apiserver(0bb26eb8-c7e6-4e42-9e76-36364255597b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3d5c4a08e0262976aafed8be13626921400dc0183e3bcab2f5cb1355a56fe5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" podUID="0bb26eb8-c7e6-4e42-9e76-36364255597b" Sep 10 23:23:02.875364 containerd[1524]: time="2025-09-10T23:23:02.875325404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w4tx7,Uid:eb59fb67-2714-487d-8245-dc796ba02d18,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.875501 
kubelet[2672]: E0910 23:23:02.875476 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.875559 kubelet[2672]: E0910 23:23:02.875512 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w4tx7" Sep 10 23:23:02.875559 kubelet[2672]: E0910 23:23:02.875527 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w4tx7" Sep 10 23:23:02.875624 kubelet[2672]: E0910 23:23:02.875572 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w4tx7_kube-system(eb59fb67-2714-487d-8245-dc796ba02d18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w4tx7_kube-system(eb59fb67-2714-487d-8245-dc796ba02d18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec68a4d448e9e59023e2dbea3ccb7cf5f5232b6053533af2167d369f4307d719\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w4tx7" podUID="eb59fb67-2714-487d-8245-dc796ba02d18" Sep 10 23:23:02.876696 containerd[1524]: time="2025-09-10T23:23:02.876377123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c6b9c664-sf4mz,Uid:e1038467-e177-4f29-8af7-0857c8035031,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.877348 containerd[1524]: time="2025-09-10T23:23:02.877311202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6vtm5,Uid:81cb8fd7-cecd-405f-9a31-d6d2993fb447,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.878125 kubelet[2672]: E0910 23:23:02.877827 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.878125 kubelet[2672]: E0910 23:23:02.877835 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.878125 kubelet[2672]: E0910 23:23:02.877899 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" Sep 10 23:23:02.878125 kubelet[2672]: E0910 23:23:02.877916 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" Sep 10 23:23:02.878313 kubelet[2672]: E0910 23:23:02.877866 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.878313 kubelet[2672]: E0910 23:23:02.877947 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-64c6b9c664-sf4mz_calico-system(e1038467-e177-4f29-8af7-0857c8035031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64c6b9c664-sf4mz_calico-system(e1038467-e177-4f29-8af7-0857c8035031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b65b3c9d3f5ff03e4265bbbb5a0ff8b135202ef056c28f114ac714d9b8a5a5b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" podUID="e1038467-e177-4f29-8af7-0857c8035031" Sep 10 23:23:02.878313 kubelet[2672]: E0910 23:23:02.877956 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-6vtm5" Sep 10 23:23:02.878402 kubelet[2672]: E0910 23:23:02.878083 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-6vtm5_calico-system(81cb8fd7-cecd-405f-9a31-d6d2993fb447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-6vtm5_calico-system(81cb8fd7-cecd-405f-9a31-d6d2993fb447)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17e87441ee3c44359f2ba9aca0263d990a3910bf9d2f68d5df2090fd1bf8cbb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-6vtm5" podUID="81cb8fd7-cecd-405f-9a31-d6d2993fb447" Sep 
10 23:23:02.888791 systemd[1]: Created slice kubepods-besteffort-pod56b76e01_e81c_4847_8f88_e9e155779575.slice - libcontainer container kubepods-besteffort-pod56b76e01_e81c_4847_8f88_e9e155779575.slice. Sep 10 23:23:02.892756 containerd[1524]: time="2025-09-10T23:23:02.892721067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59n6n,Uid:56b76e01-e81c-4847-8f88-e9e155779575,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:02.893874 containerd[1524]: time="2025-09-10T23:23:02.893807706Z" level=error msg="Failed to destroy network for sandbox \"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.895632 containerd[1524]: time="2025-09-10T23:23:02.895479744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bf67f8579-sm5hx,Uid:c1a2b944-2e85-4183-b76b-a8410e249012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.895828 kubelet[2672]: E0910 23:23:02.895790 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.895871 kubelet[2672]: E0910 23:23:02.895848 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf67f8579-sm5hx" Sep 10 23:23:02.895894 kubelet[2672]: E0910 23:23:02.895867 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bf67f8579-sm5hx" Sep 10 23:23:02.895929 kubelet[2672]: E0910 23:23:02.895901 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bf67f8579-sm5hx_calico-system(c1a2b944-2e85-4183-b76b-a8410e249012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bf67f8579-sm5hx_calico-system(c1a2b944-2e85-4183-b76b-a8410e249012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5291903b46a8ef123ad2e431d3390b36525af6c2c4f51a094c508c66f4b0471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bf67f8579-sm5hx" podUID="c1a2b944-2e85-4183-b76b-a8410e249012" Sep 10 23:23:02.900392 containerd[1524]: time="2025-09-10T23:23:02.900350939Z" level=error msg="Failed to destroy network for sandbox \"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 10 23:23:02.901646 containerd[1524]: time="2025-09-10T23:23:02.901515258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6df754fc-c7k7k,Uid:33898bb5-8476-49e9-ae84-9e80452648c1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.901962 kubelet[2672]: E0910 23:23:02.901924 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.902003 kubelet[2672]: E0910 23:23:02.901982 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" Sep 10 23:23:02.902045 kubelet[2672]: E0910 23:23:02.901999 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" Sep 10 23:23:02.902068 kubelet[2672]: E0910 23:23:02.902047 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6df754fc-c7k7k_calico-apiserver(33898bb5-8476-49e9-ae84-9e80452648c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6df754fc-c7k7k_calico-apiserver(33898bb5-8476-49e9-ae84-9e80452648c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7f3780fe00af58446a59bf58a9090eb48c79bd54ae33097326c8f5db9489492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" podUID="33898bb5-8476-49e9-ae84-9e80452648c1" Sep 10 23:23:02.940298 containerd[1524]: time="2025-09-10T23:23:02.940220859Z" level=error msg="Failed to destroy network for sandbox \"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.941408 containerd[1524]: time="2025-09-10T23:23:02.941235418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59n6n,Uid:56b76e01-e81c-4847-8f88-e9e155779575,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.941803 kubelet[2672]: E0910 23:23:02.941762 2672 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:02.941875 kubelet[2672]: E0910 23:23:02.941832 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59n6n" Sep 10 23:23:02.941875 kubelet[2672]: E0910 23:23:02.941861 2672 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59n6n" Sep 10 23:23:02.941933 kubelet[2672]: E0910 23:23:02.941907 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59n6n_calico-system(56b76e01-e81c-4847-8f88-e9e155779575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59n6n_calico-system(56b76e01-e81c-4847-8f88-e9e155779575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"054571fc6af0b15f7b1a3d508e1a809b4dc96c0e7845435fee6b5bed9373632b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-59n6n" podUID="56b76e01-e81c-4847-8f88-e9e155779575" Sep 10 23:23:02.973396 containerd[1524]: time="2025-09-10T23:23:02.973140826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:23:03.668615 systemd[1]: run-netns-cni\x2d0633f6b3\x2d0fed\x2d98b6\x2d0d46\x2dd997cc093bf7.mount: Deactivated successfully. Sep 10 23:23:03.668712 systemd[1]: run-netns-cni\x2de91f48b8\x2df322\x2d308c\x2d8ab8\x2d3aeef453283b.mount: Deactivated successfully. Sep 10 23:23:03.668793 systemd[1]: run-netns-cni\x2d9d41dce4\x2d58ef\x2da805\x2d9ecf\x2dfa934703e3bf.mount: Deactivated successfully. Sep 10 23:23:03.668839 systemd[1]: run-netns-cni\x2d15dc3d54\x2d5984\x2d0b2e\x2db8fd\x2d36c8b611664f.mount: Deactivated successfully. Sep 10 23:23:06.326222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891046966.mount: Deactivated successfully. Sep 10 23:23:06.643324 containerd[1524]: time="2025-09-10T23:23:06.643150777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:23:06.651190 containerd[1524]: time="2025-09-10T23:23:06.651122131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.663660 containerd[1524]: time="2025-09-10T23:23:06.663587481Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.664419 containerd[1524]: time="2025-09-10T23:23:06.664178961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size 
\"151100319\" in 3.690997615s" Sep 10 23:23:06.664419 containerd[1524]: time="2025-09-10T23:23:06.664221761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:23:06.664743 containerd[1524]: time="2025-09-10T23:23:06.664715001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.674168 containerd[1524]: time="2025-09-10T23:23:06.674049633Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:23:06.711288 containerd[1524]: time="2025-09-10T23:23:06.710744685Z" level=info msg="Container a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:06.719691 containerd[1524]: time="2025-09-10T23:23:06.719626758Z" level=info msg="CreateContainer within sandbox \"35f645b4653e1e1c021848f12bd3900b1f2ec25d81644928137aa49893aa0030\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\"" Sep 10 23:23:06.720338 containerd[1524]: time="2025-09-10T23:23:06.720174598Z" level=info msg="StartContainer for \"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\"" Sep 10 23:23:06.721852 containerd[1524]: time="2025-09-10T23:23:06.721809076Z" level=info msg="connecting to shim a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d" address="unix:///run/containerd/s/f8a99070bc5e9e050c68879def663a22b5934f049e7e11a62cdd955e366578ad" protocol=ttrpc version=3 Sep 10 23:23:06.766448 systemd[1]: Started cri-containerd-a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d.scope - 
libcontainer container a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d. Sep 10 23:23:06.811863 containerd[1524]: time="2025-09-10T23:23:06.811747887Z" level=info msg="StartContainer for \"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\" returns successfully" Sep 10 23:23:06.924361 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:23:06.924458 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 23:23:07.011713 kubelet[2672]: I0910 23:23:07.011614 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-slwqs" podStartSLOduration=1.717616501 podStartE2EDuration="13.011595893s" podCreationTimestamp="2025-09-10 23:22:54 +0000 UTC" firstStartedPulling="2025-09-10 23:22:55.372248527 +0000 UTC m=+17.593202190" lastFinishedPulling="2025-09-10 23:23:06.666227879 +0000 UTC m=+28.887181582" observedRunningTime="2025-09-10 23:23:07.010481534 +0000 UTC m=+29.231435197" watchObservedRunningTime="2025-09-10 23:23:07.011595893 +0000 UTC m=+29.232549556" Sep 10 23:23:07.159097 kubelet[2672]: I0910 23:23:07.159053 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktgsn\" (UniqueName: \"kubernetes.io/projected/c1a2b944-2e85-4183-b76b-a8410e249012-kube-api-access-ktgsn\") pod \"c1a2b944-2e85-4183-b76b-a8410e249012\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " Sep 10 23:23:07.159097 kubelet[2672]: I0910 23:23:07.159102 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-ca-bundle\") pod \"c1a2b944-2e85-4183-b76b-a8410e249012\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " Sep 10 23:23:07.159885 kubelet[2672]: I0910 23:23:07.159126 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-backend-key-pair\") pod \"c1a2b944-2e85-4183-b76b-a8410e249012\" (UID: \"c1a2b944-2e85-4183-b76b-a8410e249012\") " Sep 10 23:23:07.163296 kubelet[2672]: I0910 23:23:07.163099 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c1a2b944-2e85-4183-b76b-a8410e249012" (UID: "c1a2b944-2e85-4183-b76b-a8410e249012"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 23:23:07.167793 kubelet[2672]: I0910 23:23:07.167719 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a2b944-2e85-4183-b76b-a8410e249012-kube-api-access-ktgsn" (OuterVolumeSpecName: "kube-api-access-ktgsn") pod "c1a2b944-2e85-4183-b76b-a8410e249012" (UID: "c1a2b944-2e85-4183-b76b-a8410e249012"). InnerVolumeSpecName "kube-api-access-ktgsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 23:23:07.174337 kubelet[2672]: I0910 23:23:07.174286 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c1a2b944-2e85-4183-b76b-a8410e249012" (UID: "c1a2b944-2e85-4183-b76b-a8410e249012"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 23:23:07.260238 kubelet[2672]: I0910 23:23:07.260178 2672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktgsn\" (UniqueName: \"kubernetes.io/projected/c1a2b944-2e85-4183-b76b-a8410e249012-kube-api-access-ktgsn\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:07.260238 kubelet[2672]: I0910 23:23:07.260224 2672 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:07.260238 kubelet[2672]: I0910 23:23:07.260239 2672 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1a2b944-2e85-4183-b76b-a8410e249012-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:07.326944 systemd[1]: var-lib-kubelet-pods-c1a2b944\x2d2e85\x2d4183\x2db76b\x2da8410e249012-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dktgsn.mount: Deactivated successfully. Sep 10 23:23:07.327043 systemd[1]: var-lib-kubelet-pods-c1a2b944\x2d2e85\x2d4183\x2db76b\x2da8410e249012-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:23:07.888045 systemd[1]: Removed slice kubepods-besteffort-podc1a2b944_2e85_4183_b76b_a8410e249012.slice - libcontainer container kubepods-besteffort-podc1a2b944_2e85_4183_b76b_a8410e249012.slice. Sep 10 23:23:07.991427 kubelet[2672]: I0910 23:23:07.991279 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:08.058660 systemd[1]: Created slice kubepods-besteffort-pod795cd1a0_9860_4951_9a7f_d932c8e77706.slice - libcontainer container kubepods-besteffort-pod795cd1a0_9860_4951_9a7f_d932c8e77706.slice. 
Sep 10 23:23:08.165557 kubelet[2672]: I0910 23:23:08.165428 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/795cd1a0-9860-4951-9a7f-d932c8e77706-whisker-backend-key-pair\") pod \"whisker-bd5ddc7cd-8bffb\" (UID: \"795cd1a0-9860-4951-9a7f-d932c8e77706\") " pod="calico-system/whisker-bd5ddc7cd-8bffb" Sep 10 23:23:08.165557 kubelet[2672]: I0910 23:23:08.165521 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/795cd1a0-9860-4951-9a7f-d932c8e77706-whisker-ca-bundle\") pod \"whisker-bd5ddc7cd-8bffb\" (UID: \"795cd1a0-9860-4951-9a7f-d932c8e77706\") " pod="calico-system/whisker-bd5ddc7cd-8bffb" Sep 10 23:23:08.165557 kubelet[2672]: I0910 23:23:08.165560 2672 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ct2v\" (UniqueName: \"kubernetes.io/projected/795cd1a0-9860-4951-9a7f-d932c8e77706-kube-api-access-9ct2v\") pod \"whisker-bd5ddc7cd-8bffb\" (UID: \"795cd1a0-9860-4951-9a7f-d932c8e77706\") " pod="calico-system/whisker-bd5ddc7cd-8bffb" Sep 10 23:23:08.363292 containerd[1524]: time="2025-09-10T23:23:08.363191409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd5ddc7cd-8bffb,Uid:795cd1a0-9860-4951-9a7f-d932c8e77706,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:08.568478 systemd-networkd[1464]: cali77279485411: Link UP Sep 10 23:23:08.568744 systemd-networkd[1464]: cali77279485411: Gained carrier Sep 10 23:23:08.584371 containerd[1524]: 2025-09-10 23:23:08.417 [INFO][3914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:08.584371 containerd[1524]: 2025-09-10 23:23:08.448 [INFO][3914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0 
whisker-bd5ddc7cd- calico-system 795cd1a0-9860-4951-9a7f-d932c8e77706 888 0 2025-09-10 23:23:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bd5ddc7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bd5ddc7cd-8bffb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali77279485411 [] [] }} ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-" Sep 10 23:23:08.584371 containerd[1524]: 2025-09-10 23:23:08.448 [INFO][3914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.584371 containerd[1524]: 2025-09-10 23:23:08.513 [INFO][3929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" HandleID="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Workload="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.513 [INFO][3929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" HandleID="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Workload="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058b850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bd5ddc7cd-8bffb", "timestamp":"2025-09-10 23:23:08.513038227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.513 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.513 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.513 [INFO][3929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.524 [INFO][3929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" host="localhost" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.529 [INFO][3929] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.534 [INFO][3929] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.535 [INFO][3929] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.538 [INFO][3929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:08.584626 containerd[1524]: 2025-09-10 23:23:08.538 [INFO][3929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" host="localhost" Sep 10 23:23:08.584819 containerd[1524]: 2025-09-10 23:23:08.539 [INFO][3929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b Sep 10 23:23:08.584819 
containerd[1524]: 2025-09-10 23:23:08.543 [INFO][3929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" host="localhost" Sep 10 23:23:08.584819 containerd[1524]: 2025-09-10 23:23:08.552 [INFO][3929] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" host="localhost" Sep 10 23:23:08.584819 containerd[1524]: 2025-09-10 23:23:08.552 [INFO][3929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" host="localhost" Sep 10 23:23:08.584819 containerd[1524]: 2025-09-10 23:23:08.552 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:08.584819 containerd[1524]: 2025-09-10 23:23:08.552 [INFO][3929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" HandleID="k8s-pod-network.fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Workload="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.584926 containerd[1524]: 2025-09-10 23:23:08.556 [INFO][3914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0", GenerateName:"whisker-bd5ddc7cd-", Namespace:"calico-system", SelfLink:"", UID:"795cd1a0-9860-4951-9a7f-d932c8e77706", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 10, 23, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd5ddc7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bd5ddc7cd-8bffb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77279485411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:08.584926 containerd[1524]: 2025-09-10 23:23:08.556 [INFO][3914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.584990 containerd[1524]: 2025-09-10 23:23:08.557 [INFO][3914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77279485411 ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.584990 containerd[1524]: 2025-09-10 23:23:08.567 [INFO][3914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" 
WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.585029 containerd[1524]: 2025-09-10 23:23:08.567 [INFO][3914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0", GenerateName:"whisker-bd5ddc7cd-", Namespace:"calico-system", SelfLink:"", UID:"795cd1a0-9860-4951-9a7f-d932c8e77706", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd5ddc7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b", Pod:"whisker-bd5ddc7cd-8bffb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali77279485411", MAC:"be:ee:75:35:18:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:08.585073 containerd[1524]: 2025-09-10 23:23:08.579 [INFO][3914] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" Namespace="calico-system" Pod="whisker-bd5ddc7cd-8bffb" WorkloadEndpoint="localhost-k8s-whisker--bd5ddc7cd--8bffb-eth0" Sep 10 23:23:08.618763 containerd[1524]: time="2025-09-10T23:23:08.618717835Z" level=info msg="connecting to shim fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b" address="unix:///run/containerd/s/c8b37dbaccc319dfb5973f290aec85fac856bf3c0aaf81bab5f19723186f1eb9" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:08.641416 systemd[1]: Started cri-containerd-fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b.scope - libcontainer container fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b. Sep 10 23:23:08.655367 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:08.674617 containerd[1524]: time="2025-09-10T23:23:08.674577157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd5ddc7cd-8bffb,Uid:795cd1a0-9860-4951-9a7f-d932c8e77706,Namespace:calico-system,Attempt:0,} returns sandbox id \"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b\"" Sep 10 23:23:08.676170 containerd[1524]: time="2025-09-10T23:23:08.676096956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:23:09.610532 containerd[1524]: time="2025-09-10T23:23:09.609803427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:09.610532 containerd[1524]: time="2025-09-10T23:23:09.610295107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:23:09.611355 containerd[1524]: time="2025-09-10T23:23:09.611326186Z" level=info msg="ImageCreate event 
name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:09.613718 containerd[1524]: time="2025-09-10T23:23:09.613689905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:09.614450 containerd[1524]: time="2025-09-10T23:23:09.614403664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 938.268668ms" Sep 10 23:23:09.614450 containerd[1524]: time="2025-09-10T23:23:09.614448624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:23:09.617567 containerd[1524]: time="2025-09-10T23:23:09.617533662Z" level=info msg="CreateContainer within sandbox \"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:23:09.625965 containerd[1524]: time="2025-09-10T23:23:09.624446618Z" level=info msg="Container e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:09.634442 containerd[1524]: time="2025-09-10T23:23:09.634396971Z" level=info msg="CreateContainer within sandbox \"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0\"" Sep 10 23:23:09.635116 containerd[1524]: 
time="2025-09-10T23:23:09.635089891Z" level=info msg="StartContainer for \"e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0\"" Sep 10 23:23:09.636454 containerd[1524]: time="2025-09-10T23:23:09.636428010Z" level=info msg="connecting to shim e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0" address="unix:///run/containerd/s/c8b37dbaccc319dfb5973f290aec85fac856bf3c0aaf81bab5f19723186f1eb9" protocol=ttrpc version=3 Sep 10 23:23:09.670455 systemd[1]: Started cri-containerd-e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0.scope - libcontainer container e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0. Sep 10 23:23:09.709492 containerd[1524]: time="2025-09-10T23:23:09.709449004Z" level=info msg="StartContainer for \"e5b84d843a7b37d14390fd3e410f7ad1aada6465ec7f0ee660f6ebb3de4b94c0\" returns successfully" Sep 10 23:23:09.712115 containerd[1524]: time="2025-09-10T23:23:09.712026042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:23:09.880128 kubelet[2672]: I0910 23:23:09.880034 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a2b944-2e85-4183-b76b-a8410e249012" path="/var/lib/kubelet/pods/c1a2b944-2e85-4183-b76b-a8410e249012/volumes" Sep 10 23:23:09.988416 systemd-networkd[1464]: cali77279485411: Gained IPv6LL Sep 10 23:23:11.069786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519325866.mount: Deactivated successfully. 
Sep 10 23:23:11.119579 containerd[1524]: time="2025-09-10T23:23:11.119018234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:11.119949 containerd[1524]: time="2025-09-10T23:23:11.119742194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:23:11.120565 containerd[1524]: time="2025-09-10T23:23:11.120529513Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:11.122876 containerd[1524]: time="2025-09-10T23:23:11.122844552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:11.126945 containerd[1524]: time="2025-09-10T23:23:11.126907790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.414752148s" Sep 10 23:23:11.126987 containerd[1524]: time="2025-09-10T23:23:11.126945630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:23:11.129761 containerd[1524]: time="2025-09-10T23:23:11.129731028Z" level=info msg="CreateContainer within sandbox \"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:23:11.136741 
containerd[1524]: time="2025-09-10T23:23:11.136324464Z" level=info msg="Container fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:11.144762 containerd[1524]: time="2025-09-10T23:23:11.144625460Z" level=info msg="CreateContainer within sandbox \"fca8789449d5a545493c5bb77872b50e2ee1cc7992d2aa77fb2e5845a995595b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3\"" Sep 10 23:23:11.145276 containerd[1524]: time="2025-09-10T23:23:11.145211579Z" level=info msg="StartContainer for \"fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3\"" Sep 10 23:23:11.148245 containerd[1524]: time="2025-09-10T23:23:11.148206698Z" level=info msg="connecting to shim fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3" address="unix:///run/containerd/s/c8b37dbaccc319dfb5973f290aec85fac856bf3c0aaf81bab5f19723186f1eb9" protocol=ttrpc version=3 Sep 10 23:23:11.175521 systemd[1]: Started cri-containerd-fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3.scope - libcontainer container fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3. 
Sep 10 23:23:11.272799 containerd[1524]: time="2025-09-10T23:23:11.272760468Z" level=info msg="StartContainer for \"fb6293b046adb854dd93efb979c19849c07c46215595d9804ec3dc356c40c3f3\" returns successfully" Sep 10 23:23:12.018291 kubelet[2672]: I0910 23:23:12.018138 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bd5ddc7cd-8bffb" podStartSLOduration=1.5663030180000002 podStartE2EDuration="4.018113251s" podCreationTimestamp="2025-09-10 23:23:08 +0000 UTC" firstStartedPulling="2025-09-10 23:23:08.675873516 +0000 UTC m=+30.896827219" lastFinishedPulling="2025-09-10 23:23:11.127683749 +0000 UTC m=+33.348637452" observedRunningTime="2025-09-10 23:23:12.016760972 +0000 UTC m=+34.237714675" watchObservedRunningTime="2025-09-10 23:23:12.018113251 +0000 UTC m=+34.239066954" Sep 10 23:23:13.878808 containerd[1524]: time="2025-09-10T23:23:13.878747022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c6b9c664-sf4mz,Uid:e1038467-e177-4f29-8af7-0857c8035031,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:13.879317 containerd[1524]: time="2025-09-10T23:23:13.879243262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-95c8s,Uid:59db97de-68b3-4a73-8c92-92bec877428c,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:13.879317 containerd[1524]: time="2025-09-10T23:23:13.879288422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-q6x8n,Uid:0bb26eb8-c7e6-4e42-9e76-36364255597b,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:14.010735 systemd-networkd[1464]: cali7b0533e80ab: Link UP Sep 10 23:23:14.011319 systemd-networkd[1464]: cali7b0533e80ab: Gained carrier Sep 10 23:23:14.025819 containerd[1524]: 2025-09-10 23:23:13.914 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:14.025819 containerd[1524]: 2025-09-10 23:23:13.937 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0 calico-apiserver-9c84896fc- calico-apiserver 0bb26eb8-c7e6-4e42-9e76-36364255597b 826 0 2025-09-10 23:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c84896fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c84896fc-q6x8n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b0533e80ab [] [] }} ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-" Sep 10 23:23:14.025819 containerd[1524]: 2025-09-10 23:23:13.937 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.025819 containerd[1524]: 2025-09-10 23:23:13.965 [INFO][4245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.965 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000504aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c84896fc-q6x8n", "timestamp":"2025-09-10 23:23:13.9651861 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.965 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.965 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.965 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.976 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" host="localhost" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.982 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.986 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.988 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.990 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.026037 containerd[1524]: 2025-09-10 23:23:13.990 [INFO][4245] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" host="localhost" Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:13.992 [INFO][4245] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827 Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:13.996 [INFO][4245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" host="localhost" Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:14.002 [INFO][4245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" host="localhost" Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:14.002 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" host="localhost" Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:14.002 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:23:14.026249 containerd[1524]: 2025-09-10 23:23:14.002 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.026401 containerd[1524]: 2025-09-10 23:23:14.008 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0", GenerateName:"calico-apiserver-9c84896fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bb26eb8-c7e6-4e42-9e76-36364255597b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c84896fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c84896fc-q6x8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b0533e80ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.026452 containerd[1524]: 2025-09-10 23:23:14.008 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.026452 containerd[1524]: 2025-09-10 23:23:14.008 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b0533e80ab ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.026452 containerd[1524]: 2025-09-10 23:23:14.011 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.026511 containerd[1524]: 2025-09-10 23:23:14.012 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0", GenerateName:"calico-apiserver-9c84896fc-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0bb26eb8-c7e6-4e42-9e76-36364255597b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c84896fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827", Pod:"calico-apiserver-9c84896fc-q6x8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b0533e80ab", MAC:"b2:e3:44:51:ee:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.026556 containerd[1524]: 2025-09-10 23:23:14.023 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-q6x8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:14.046199 containerd[1524]: time="2025-09-10T23:23:14.046093301Z" level=info msg="connecting to shim 8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" address="unix:///run/containerd/s/9b2e1a1204a488f732e24a35a20099a60889a2abbaac412a9eea53dcc1402b17" namespace=k8s.io protocol=ttrpc 
version=3 Sep 10 23:23:14.070487 systemd[1]: Started cri-containerd-8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827.scope - libcontainer container 8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827. Sep 10 23:23:14.088034 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:14.121599 containerd[1524]: time="2025-09-10T23:23:14.121526747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-q6x8n,Uid:0bb26eb8-c7e6-4e42-9e76-36364255597b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\"" Sep 10 23:23:14.130246 systemd-networkd[1464]: cali6dd5ae6c745: Link UP Sep 10 23:23:14.130476 systemd-networkd[1464]: cali6dd5ae6c745: Gained carrier Sep 10 23:23:14.136016 containerd[1524]: time="2025-09-10T23:23:14.135975980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:23:14.151862 containerd[1524]: 2025-09-10 23:23:13.911 [INFO][4206] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:14.151862 containerd[1524]: 2025-09-10 23:23:13.935 [INFO][4206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0 calico-apiserver-9c84896fc- calico-apiserver 59db97de-68b3-4a73-8c92-92bec877428c 816 0 2025-09-10 23:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c84896fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9c84896fc-95c8s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6dd5ae6c745 [] [] }} 
ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-" Sep 10 23:23:14.151862 containerd[1524]: 2025-09-10 23:23:13.935 [INFO][4206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.151862 containerd[1524]: 2025-09-10 23:23:13.966 [INFO][4243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" HandleID="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Workload="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:13.966 [INFO][4243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" HandleID="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Workload="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9c84896fc-95c8s", "timestamp":"2025-09-10 23:23:13.966140179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:13.966 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.003 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.003 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.076 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" host="localhost" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.084 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.092 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.094 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.097 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.152109 containerd[1524]: 2025-09-10 23:23:14.097 [INFO][4243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" host="localhost" Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.098 [INFO][4243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27 Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.109 [INFO][4243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" host="localhost" Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.119 [INFO][4243] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" host="localhost" Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.119 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" host="localhost" Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.119 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:14.152406 containerd[1524]: 2025-09-10 23:23:14.120 [INFO][4243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" HandleID="k8s-pod-network.c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Workload="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.152548 containerd[1524]: 2025-09-10 23:23:14.124 [INFO][4206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0", GenerateName:"calico-apiserver-9c84896fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"59db97de-68b3-4a73-8c92-92bec877428c", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c84896fc", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9c84896fc-95c8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd5ae6c745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.152633 containerd[1524]: 2025-09-10 23:23:14.126 [INFO][4206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.152633 containerd[1524]: 2025-09-10 23:23:14.126 [INFO][4206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dd5ae6c745 ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.152633 containerd[1524]: 2025-09-10 23:23:14.130 [INFO][4206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.152717 containerd[1524]: 2025-09-10 23:23:14.132 
[INFO][4206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0", GenerateName:"calico-apiserver-9c84896fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"59db97de-68b3-4a73-8c92-92bec877428c", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c84896fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27", Pod:"calico-apiserver-9c84896fc-95c8s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd5ae6c745", MAC:"16:86:b0:65:d1:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.152781 containerd[1524]: 2025-09-10 23:23:14.146 [INFO][4206] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" Namespace="calico-apiserver" Pod="calico-apiserver-9c84896fc-95c8s" WorkloadEndpoint="localhost-k8s-calico--apiserver--9c84896fc--95c8s-eth0" Sep 10 23:23:14.180392 containerd[1524]: time="2025-09-10T23:23:14.180344519Z" level=info msg="connecting to shim c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27" address="unix:///run/containerd/s/879b33abaa8edb32e3f922bf8c38816a3205e2d44723b74c27250273ee41b71f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:14.210563 systemd[1]: Started cri-containerd-c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27.scope - libcontainer container c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27. Sep 10 23:23:14.217826 systemd-networkd[1464]: calid9f0292f4f5: Link UP Sep 10 23:23:14.218756 systemd-networkd[1464]: calid9f0292f4f5: Gained carrier Sep 10 23:23:14.228184 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:14.260323 containerd[1524]: 2025-09-10 23:23:13.909 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:14.260323 containerd[1524]: 2025-09-10 23:23:13.938 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0 calico-kube-controllers-64c6b9c664- calico-system e1038467-e177-4f29-8af7-0857c8035031 822 0 2025-09-10 23:22:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64c6b9c664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64c6b9c664-sf4mz eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calid9f0292f4f5 [] [] }} ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-" Sep 10 23:23:14.260323 containerd[1524]: 2025-09-10 23:23:13.938 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.260323 containerd[1524]: 2025-09-10 23:23:13.973 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" HandleID="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Workload="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:13.973 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" HandleID="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Workload="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64c6b9c664-sf4mz", "timestamp":"2025-09-10 23:23:13.973827656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:13.973 [INFO][4254] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.119 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.120 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.176 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" host="localhost" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.184 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.193 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.196 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.198 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.260559 containerd[1524]: 2025-09-10 23:23:14.199 [INFO][4254] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" host="localhost" Sep 10 23:23:14.260794 containerd[1524]: 2025-09-10 23:23:14.200 [INFO][4254] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479 Sep 10 23:23:14.260794 containerd[1524]: 2025-09-10 23:23:14.206 [INFO][4254] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" host="localhost" Sep 10 23:23:14.260794 
containerd[1524]: 2025-09-10 23:23:14.212 [INFO][4254] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" host="localhost" Sep 10 23:23:14.260794 containerd[1524]: 2025-09-10 23:23:14.212 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" host="localhost" Sep 10 23:23:14.260794 containerd[1524]: 2025-09-10 23:23:14.212 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:14.260794 containerd[1524]: 2025-09-10 23:23:14.212 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" HandleID="k8s-pod-network.12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Workload="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.260916 containerd[1524]: 2025-09-10 23:23:14.215 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0", GenerateName:"calico-kube-controllers-64c6b9c664-", Namespace:"calico-system", SelfLink:"", UID:"e1038467-e177-4f29-8af7-0857c8035031", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c6b9c664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64c6b9c664-sf4mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9f0292f4f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.260962 containerd[1524]: 2025-09-10 23:23:14.216 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.260962 containerd[1524]: 2025-09-10 23:23:14.216 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9f0292f4f5 ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.260962 containerd[1524]: 2025-09-10 23:23:14.219 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" 
Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.261017 containerd[1524]: 2025-09-10 23:23:14.220 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0", GenerateName:"calico-kube-controllers-64c6b9c664-", Namespace:"calico-system", SelfLink:"", UID:"e1038467-e177-4f29-8af7-0857c8035031", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c6b9c664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479", Pod:"calico-kube-controllers-64c6b9c664-sf4mz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calid9f0292f4f5", MAC:"be:c3:23:35:39:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.261064 containerd[1524]: 2025-09-10 23:23:14.258 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" Namespace="calico-system" Pod="calico-kube-controllers-64c6b9c664-sf4mz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64c6b9c664--sf4mz-eth0" Sep 10 23:23:14.261798 containerd[1524]: time="2025-09-10T23:23:14.261724962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c84896fc-95c8s,Uid:59db97de-68b3-4a73-8c92-92bec877428c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27\"" Sep 10 23:23:14.300971 containerd[1524]: time="2025-09-10T23:23:14.300763824Z" level=info msg="connecting to shim 12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479" address="unix:///run/containerd/s/68c61dc27ae07df5a18623e0546422624f3e257e8ca1524b58120ada0c9b0c47" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:14.330474 systemd[1]: Started cri-containerd-12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479.scope - libcontainer container 12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479. 
Sep 10 23:23:14.350547 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:14.371887 containerd[1524]: time="2025-09-10T23:23:14.371835111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c6b9c664-sf4mz,Uid:e1038467-e177-4f29-8af7-0857c8035031,Namespace:calico-system,Attempt:0,} returns sandbox id \"12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479\"" Sep 10 23:23:14.885672 containerd[1524]: time="2025-09-10T23:23:14.885632274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6df754fc-c7k7k,Uid:33898bb5-8476-49e9-ae84-9e80452648c1,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:14.986361 systemd-networkd[1464]: caliafb2057aac4: Link UP Sep 10 23:23:14.986560 systemd-networkd[1464]: caliafb2057aac4: Gained carrier Sep 10 23:23:14.997691 containerd[1524]: 2025-09-10 23:23:14.906 [INFO][4450] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:14.997691 containerd[1524]: 2025-09-10 23:23:14.920 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0 calico-apiserver-c6df754fc- calico-apiserver 33898bb5-8476-49e9-ae84-9e80452648c1 824 0 2025-09-10 23:22:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6df754fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c6df754fc-c7k7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliafb2057aac4 [] [] }} ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-" Sep 10 23:23:14.997691 containerd[1524]: 2025-09-10 23:23:14.920 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.997691 containerd[1524]: 2025-09-10 23:23:14.946 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" HandleID="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Workload="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.946 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" HandleID="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Workload="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c6df754fc-c7k7k", "timestamp":"2025-09-10 23:23:14.945987926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.946 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.946 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.946 [INFO][4464] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.955 [INFO][4464] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" host="localhost" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.959 [INFO][4464] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.964 [INFO][4464] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.967 [INFO][4464] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.970 [INFO][4464] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:14.997971 containerd[1524]: 2025-09-10 23:23:14.970 [INFO][4464] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" host="localhost" Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.972 [INFO][4464] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1 Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.975 [INFO][4464] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" host="localhost" Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.981 [INFO][4464] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" host="localhost" Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.981 [INFO][4464] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" host="localhost" Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.982 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:14.998193 containerd[1524]: 2025-09-10 23:23:14.982 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" HandleID="k8s-pod-network.30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Workload="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.998336 containerd[1524]: 2025-09-10 23:23:14.984 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0", GenerateName:"calico-apiserver-c6df754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"33898bb5-8476-49e9-ae84-9e80452648c1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6df754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c6df754fc-c7k7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb2057aac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.998387 containerd[1524]: 2025-09-10 23:23:14.984 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.998387 containerd[1524]: 2025-09-10 23:23:14.984 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafb2057aac4 ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.998387 containerd[1524]: 2025-09-10 23:23:14.986 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:14.998447 containerd[1524]: 2025-09-10 23:23:14.986 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0", GenerateName:"calico-apiserver-c6df754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"33898bb5-8476-49e9-ae84-9e80452648c1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6df754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1", Pod:"calico-apiserver-c6df754fc-c7k7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb2057aac4", MAC:"c2:53:33:10:04:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:14.998498 containerd[1524]: 2025-09-10 23:23:14.995 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" Namespace="calico-apiserver" Pod="calico-apiserver-c6df754fc-c7k7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6df754fc--c7k7k-eth0" Sep 10 23:23:15.215676 containerd[1524]: time="2025-09-10T23:23:15.214944208Z" level=info msg="connecting to shim 30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1" address="unix:///run/containerd/s/e7a04b650f7e44d3eb538bc00eeaaa285a0040856059aa71f9a4b1c2e5632d69" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:15.256003 systemd[1]: Started cri-containerd-30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1.scope - libcontainer container 30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1. Sep 10 23:23:15.273529 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:15.299508 containerd[1524]: time="2025-09-10T23:23:15.299469971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6df754fc-c7k7k,Uid:33898bb5-8476-49e9-ae84-9e80452648c1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1\"" Sep 10 23:23:15.301370 systemd-networkd[1464]: calid9f0292f4f5: Gained IPv6LL Sep 10 23:23:15.684397 systemd-networkd[1464]: cali6dd5ae6c745: Gained IPv6LL Sep 10 23:23:15.721107 containerd[1524]: time="2025-09-10T23:23:15.721068309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.722378 containerd[1524]: time="2025-09-10T23:23:15.722356548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:23:15.723197 containerd[1524]: time="2025-09-10T23:23:15.723146588Z" level=info msg="ImageCreate event 
name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.724802 containerd[1524]: time="2025-09-10T23:23:15.724762907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.725535 containerd[1524]: time="2025-09-10T23:23:15.725508987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.589300127s" Sep 10 23:23:15.725703 containerd[1524]: time="2025-09-10T23:23:15.725610547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:23:15.726642 containerd[1524]: time="2025-09-10T23:23:15.726614947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:23:15.729211 containerd[1524]: time="2025-09-10T23:23:15.729180386Z" level=info msg="CreateContainer within sandbox \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:15.736516 containerd[1524]: time="2025-09-10T23:23:15.734736543Z" level=info msg="Container ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:15.742535 containerd[1524]: time="2025-09-10T23:23:15.742495700Z" level=info msg="CreateContainer within sandbox \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\"" Sep 10 23:23:15.743149 containerd[1524]: time="2025-09-10T23:23:15.743123659Z" level=info msg="StartContainer for \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\"" Sep 10 23:23:15.744095 containerd[1524]: time="2025-09-10T23:23:15.744059859Z" level=info msg="connecting to shim ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c" address="unix:///run/containerd/s/9b2e1a1204a488f732e24a35a20099a60889a2abbaac412a9eea53dcc1402b17" protocol=ttrpc version=3 Sep 10 23:23:15.762425 systemd[1]: Started cri-containerd-ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c.scope - libcontainer container ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c. Sep 10 23:23:15.794082 containerd[1524]: time="2025-09-10T23:23:15.794009117Z" level=info msg="StartContainer for \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" returns successfully" Sep 10 23:23:15.876440 systemd-networkd[1464]: cali7b0533e80ab: Gained IPv6LL Sep 10 23:23:15.877900 kubelet[2672]: E0910 23:23:15.877837 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:15.878742 containerd[1524]: time="2025-09-10T23:23:15.878191921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxtpf,Uid:b70e0103-15b0-42d4-8bb9-9869a6a405c8,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:15.880804 containerd[1524]: time="2025-09-10T23:23:15.879488760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6vtm5,Uid:81cb8fd7-cecd-405f-9a31-d6d2993fb447,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:16.013403 containerd[1524]: time="2025-09-10T23:23:16.012865543Z" level=info msg="ImageUpdate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:16.029273 containerd[1524]: time="2025-09-10T23:23:16.028785337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 23:23:16.033296 containerd[1524]: time="2025-09-10T23:23:16.032389495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 305.742828ms" Sep 10 23:23:16.033296 containerd[1524]: time="2025-09-10T23:23:16.032428295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:23:16.035522 containerd[1524]: time="2025-09-10T23:23:16.034793494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:23:16.037492 containerd[1524]: time="2025-09-10T23:23:16.037462293Z" level=info msg="CreateContainer within sandbox \"c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:16.045168 systemd-networkd[1464]: cali7f704c8376d: Link UP Sep 10 23:23:16.045367 systemd-networkd[1464]: cali7f704c8376d: Gained carrier Sep 10 23:23:16.054806 kubelet[2672]: I0910 23:23:16.053913 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c84896fc-q6x8n" podStartSLOduration=23.463085279 podStartE2EDuration="25.053892366s" podCreationTimestamp="2025-09-10 23:22:51 +0000 UTC" firstStartedPulling="2025-09-10 23:23:14.13567738 +0000 UTC m=+36.356631083" lastFinishedPulling="2025-09-10 
23:23:15.726484467 +0000 UTC m=+37.947438170" observedRunningTime="2025-09-10 23:23:16.048323729 +0000 UTC m=+38.269277432" watchObservedRunningTime="2025-09-10 23:23:16.053892366 +0000 UTC m=+38.274846069" Sep 10 23:23:16.055179 containerd[1524]: time="2025-09-10T23:23:16.054052726Z" level=info msg="Container 789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:16.056430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435829657.mount: Deactivated successfully. Sep 10 23:23:16.069204 containerd[1524]: 2025-09-10 23:23:15.925 [INFO][4583] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:16.069204 containerd[1524]: 2025-09-10 23:23:15.960 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0 coredns-7c65d6cfc9- kube-system b70e0103-15b0-42d4-8bb9-9869a6a405c8 820 0 2025-09-10 23:22:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zxtpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f704c8376d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-" Sep 10 23:23:16.069204 containerd[1524]: 2025-09-10 23:23:15.960 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.069204 containerd[1524]: 2025-09-10 23:23:15.991 [INFO][4619] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" HandleID="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Workload="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:15.992 [INFO][4619] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" HandleID="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Workload="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zxtpf", "timestamp":"2025-09-10 23:23:15.991925912 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:15.992 [INFO][4619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:15.992 [INFO][4619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:15.992 [INFO][4619] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.003 [INFO][4619] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" host="localhost" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.008 [INFO][4619] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.015 [INFO][4619] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.018 [INFO][4619] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.022 [INFO][4619] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.069650 containerd[1524]: 2025-09-10 23:23:16.022 [INFO][4619] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" host="localhost" Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.023 [INFO][4619] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35 Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.027 [INFO][4619] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" host="localhost" Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4619] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" host="localhost" Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4619] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" host="localhost" Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:16.069861 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4619] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" HandleID="k8s-pod-network.dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Workload="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.069969 containerd[1524]: 2025-09-10 23:23:16.039 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b70e0103-15b0-42d4-8bb9-9869a6a405c8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zxtpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f704c8376d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.070036 containerd[1524]: 2025-09-10 23:23:16.039 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.070036 containerd[1524]: 2025-09-10 23:23:16.039 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f704c8376d ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.070036 containerd[1524]: 2025-09-10 23:23:16.044 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.070099 containerd[1524]: 2025-09-10 23:23:16.046 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b70e0103-15b0-42d4-8bb9-9869a6a405c8", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35", Pod:"coredns-7c65d6cfc9-zxtpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f704c8376d", MAC:"1a:3c:8d:a1:fd:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.070099 containerd[1524]: 2025-09-10 23:23:16.066 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zxtpf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zxtpf-eth0" Sep 10 23:23:16.071177 containerd[1524]: time="2025-09-10T23:23:16.071125279Z" level=info msg="CreateContainer within sandbox \"c6a833bfaef2708452e4326b97f8e1622a9998b71e91b0e6ce452082ac362f27\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410\"" Sep 10 23:23:16.071698 containerd[1524]: time="2025-09-10T23:23:16.071654279Z" level=info msg="StartContainer for \"789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410\"" Sep 10 23:23:16.074214 containerd[1524]: time="2025-09-10T23:23:16.073746878Z" level=info msg="connecting to shim 789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410" address="unix:///run/containerd/s/879b33abaa8edb32e3f922bf8c38816a3205e2d44723b74c27250273ee41b71f" protocol=ttrpc version=3 Sep 10 23:23:16.102616 systemd[1]: Started cri-containerd-789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410.scope - libcontainer container 789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410. 
Sep 10 23:23:16.147302 containerd[1524]: time="2025-09-10T23:23:16.146856809Z" level=info msg="connecting to shim dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35" address="unix:///run/containerd/s/87fb1a9bab533de4711441912d15e24a280bd0e11ab09f5112f2d51e01545b18" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:16.169889 systemd[1]: Started cri-containerd-dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35.scope - libcontainer container dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35. Sep 10 23:23:16.185683 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:16.189944 systemd-networkd[1464]: calid95dd14d7a9: Link UP Sep 10 23:23:16.190133 systemd-networkd[1464]: calid95dd14d7a9: Gained carrier Sep 10 23:23:16.201599 containerd[1524]: time="2025-09-10T23:23:16.201568866Z" level=info msg="StartContainer for \"789fece75aa8198ad24b9f51ca7f0199df1c852050aa6eb4458691c06368d410\" returns successfully" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:15.935 [INFO][4595] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:15.959 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--6vtm5-eth0 goldmane-7988f88666- calico-system 81cb8fd7-cecd-405f-9a31-d6d2993fb447 823 0 2025-09-10 23:22:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-6vtm5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid95dd14d7a9 [] [] }} ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" 
Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:15.959 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.003 [INFO][4621] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" HandleID="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Workload="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.003 [INFO][4621] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" HandleID="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Workload="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001214d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-6vtm5", "timestamp":"2025-09-10 23:23:16.003509107 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.003 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.036 [INFO][4621] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.114 [INFO][4621] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.127 [INFO][4621] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.133 [INFO][4621] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.139 [INFO][4621] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.142 [INFO][4621] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.142 [INFO][4621] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.143 [INFO][4621] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.163 [INFO][4621] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.182 [INFO][4621] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.183 [INFO][4621] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" host="localhost" Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.183 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:16.217563 containerd[1524]: 2025-09-10 23:23:16.183 [INFO][4621] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" HandleID="k8s-pod-network.f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Workload="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.185 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6vtm5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"81cb8fd7-cecd-405f-9a31-d6d2993fb447", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-6vtm5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid95dd14d7a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.185 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.185 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid95dd14d7a9 ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.188 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.189 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" 
Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--6vtm5-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"81cb8fd7-cecd-405f-9a31-d6d2993fb447", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc", Pod:"goldmane-7988f88666-6vtm5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid95dd14d7a9", MAC:"2e:ab:cb:6b:1c:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.218092 containerd[1524]: 2025-09-10 23:23:16.210 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" Namespace="calico-system" Pod="goldmane-7988f88666-6vtm5" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--6vtm5-eth0" Sep 10 23:23:16.221983 containerd[1524]: time="2025-09-10T23:23:16.221922018Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zxtpf,Uid:b70e0103-15b0-42d4-8bb9-9869a6a405c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35\"" Sep 10 23:23:16.223351 kubelet[2672]: E0910 23:23:16.223098 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:16.225112 containerd[1524]: time="2025-09-10T23:23:16.225016377Z" level=info msg="CreateContainer within sandbox \"dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:23:16.237844 containerd[1524]: time="2025-09-10T23:23:16.237800772Z" level=info msg="Container dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:16.242237 containerd[1524]: time="2025-09-10T23:23:16.241755970Z" level=info msg="connecting to shim f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc" address="unix:///run/containerd/s/7316c88d8e9bb2443681e916e204766979684bd124b1e711cf7f8216140439f3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:16.245847 containerd[1524]: time="2025-09-10T23:23:16.245811529Z" level=info msg="CreateContainer within sandbox \"dc023cc9820bef5812564027b1bb240573b69744eba9f2d0f7fb3a5cb4228e35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613\"" Sep 10 23:23:16.246818 containerd[1524]: time="2025-09-10T23:23:16.246778928Z" level=info msg="StartContainer for \"dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613\"" Sep 10 23:23:16.249271 containerd[1524]: time="2025-09-10T23:23:16.247850128Z" level=info msg="connecting to shim dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613" 
address="unix:///run/containerd/s/87fb1a9bab533de4711441912d15e24a280bd0e11ab09f5112f2d51e01545b18" protocol=ttrpc version=3 Sep 10 23:23:16.270436 systemd[1]: Started cri-containerd-dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613.scope - libcontainer container dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613. Sep 10 23:23:16.274773 systemd[1]: Started cri-containerd-f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc.scope - libcontainer container f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc. Sep 10 23:23:16.294994 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:16.320043 containerd[1524]: time="2025-09-10T23:23:16.319988938Z" level=info msg="StartContainer for \"dd38207b4873dd607320e1f2bebeac73479be7c7b26d4e1403a9a5499e9cd613\" returns successfully" Sep 10 23:23:16.328285 containerd[1524]: time="2025-09-10T23:23:16.328221855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-6vtm5,Uid:81cb8fd7-cecd-405f-9a31-d6d2993fb447,Namespace:calico-system,Attempt:0,} returns sandbox id \"f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc\"" Sep 10 23:23:16.580367 systemd-networkd[1464]: caliafb2057aac4: Gained IPv6LL Sep 10 23:23:16.686582 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738). Sep 10 23:23:16.757863 sshd[4812]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:16.759497 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:16.763413 systemd-logind[1504]: New session 8 of user core. Sep 10 23:23:16.769435 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 10 23:23:16.881192 kubelet[2672]: E0910 23:23:16.881083 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:16.882519 containerd[1524]: time="2025-09-10T23:23:16.881552791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59n6n,Uid:56b76e01-e81c-4847-8f88-e9e155779575,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:16.883026 containerd[1524]: time="2025-09-10T23:23:16.882439790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w4tx7,Uid:eb59fb67-2714-487d-8245-dc796ba02d18,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:17.041935 kubelet[2672]: I0910 23:23:17.041906 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:17.043459 kubelet[2672]: E0910 23:23:17.043275 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:17.065836 kubelet[2672]: I0910 23:23:17.065780 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zxtpf" podStartSLOduration=35.065763758 podStartE2EDuration="35.065763758s" podCreationTimestamp="2025-09-10 23:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:17.065529718 +0000 UTC m=+39.286483421" watchObservedRunningTime="2025-09-10 23:23:17.065763758 +0000 UTC m=+39.286717421" Sep 10 23:23:17.084107 kubelet[2672]: I0910 23:23:17.081974 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c84896fc-95c8s" podStartSLOduration=24.313961457 podStartE2EDuration="26.081957471s" podCreationTimestamp="2025-09-10 23:22:51 +0000 UTC" firstStartedPulling="2025-09-10 
23:23:14.2660602 +0000 UTC m=+36.487013903" lastFinishedPulling="2025-09-10 23:23:16.034056254 +0000 UTC m=+38.255009917" observedRunningTime="2025-09-10 23:23:17.081015592 +0000 UTC m=+39.301969255" watchObservedRunningTime="2025-09-10 23:23:17.081957471 +0000 UTC m=+39.302911174" Sep 10 23:23:17.126973 systemd-networkd[1464]: cali2d6195efbf4: Link UP Sep 10 23:23:17.128713 sshd[4815]: Connection closed by 10.0.0.1 port 37738 Sep 10 23:23:17.128486 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:17.129370 systemd-networkd[1464]: cali2d6195efbf4: Gained carrier Sep 10 23:23:17.135578 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:37738.service: Deactivated successfully. Sep 10 23:23:17.141323 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:23:17.144168 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:23:17.145984 systemd-logind[1504]: Removed session 8. Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:16.919 [INFO][4834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:16.949 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0 coredns-7c65d6cfc9- kube-system eb59fb67-2714-487d-8245-dc796ba02d18 813 0 2025-09-10 23:22:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-w4tx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d6195efbf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-" Sep 10 
23:23:17.146217 containerd[1524]: 2025-09-10 23:23:16.949 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.033 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" HandleID="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Workload="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.033 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" HandleID="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Workload="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000592930), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-w4tx7", "timestamp":"2025-09-10 23:23:17.03354365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.033 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.033 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.033 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.053 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.063 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.071 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.074 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.082 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.082 [INFO][4866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.086 [INFO][4866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.090 [INFO][4866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.098 [INFO][4866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.098 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" host="localhost" Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.098 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:17.146217 containerd[1524]: 2025-09-10 23:23:17.098 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" HandleID="k8s-pod-network.d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Workload="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.115 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb59fb67-2714-487d-8245-dc796ba02d18", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-w4tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d6195efbf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.115 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.116 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d6195efbf4 ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.124 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.125 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb59fb67-2714-487d-8245-dc796ba02d18", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc", Pod:"coredns-7c65d6cfc9-w4tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d6195efbf4", MAC:"72:1e:1c:be:2e:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:17.146936 containerd[1524]: 2025-09-10 23:23:17.139 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w4tx7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w4tx7-eth0" Sep 10 23:23:17.156412 systemd-networkd[1464]: cali7f704c8376d: Gained IPv6LL Sep 10 23:23:17.211504 containerd[1524]: time="2025-09-10T23:23:17.211452622Z" level=info msg="connecting to shim d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc" address="unix:///run/containerd/s/0b69ca1801c39d48fcdf3dca6ac6eb0ab971358160c4093920663c750a13dd1b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:17.234232 systemd-networkd[1464]: calibaa01e08e1f: Link UP Sep 10 23:23:17.236414 systemd-networkd[1464]: calibaa01e08e1f: Gained carrier Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:16.990 [INFO][4831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.015 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--59n6n-eth0 csi-node-driver- calico-system 56b76e01-e81c-4847-8f88-e9e155779575 714 0 2025-09-10 23:22:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-59n6n eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] calibaa01e08e1f [] [] }} ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.015 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.080 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" HandleID="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Workload="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.081 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" HandleID="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Workload="localhost-k8s-csi--node--driver--59n6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000504b30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-59n6n", "timestamp":"2025-09-10 23:23:17.080956792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.081 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.099 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.099 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.155 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.174 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.190 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.196 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.202 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.202 [INFO][4876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.209 [INFO][4876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2 Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.214 [INFO][4876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.225 [INFO][4876] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.225 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" host="localhost" Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.225 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:17.254280 containerd[1524]: 2025-09-10 23:23:17.225 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" HandleID="k8s-pod-network.393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Workload="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.231 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--59n6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56b76e01-e81c-4847-8f88-e9e155779575", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-59n6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibaa01e08e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.231 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.231 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibaa01e08e1f ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.232 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.233 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--59n6n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56b76e01-e81c-4847-8f88-e9e155779575", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2", Pod:"csi-node-driver-59n6n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibaa01e08e1f", MAC:"6a:1f:77:b5:57:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:17.254850 containerd[1524]: 2025-09-10 23:23:17.251 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" 
Namespace="calico-system" Pod="csi-node-driver-59n6n" WorkloadEndpoint="localhost-k8s-csi--node--driver--59n6n-eth0" Sep 10 23:23:17.274443 systemd[1]: Started cri-containerd-d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc.scope - libcontainer container d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc. Sep 10 23:23:17.296803 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:17.342205 containerd[1524]: time="2025-09-10T23:23:17.342146732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w4tx7,Uid:eb59fb67-2714-487d-8245-dc796ba02d18,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc\"" Sep 10 23:23:17.343781 kubelet[2672]: E0910 23:23:17.343739 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:17.352868 containerd[1524]: time="2025-09-10T23:23:17.352823528Z" level=info msg="connecting to shim 393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2" address="unix:///run/containerd/s/2d6816c0966357c5d05c6026070a00e3dd75c292b705b7b0add957deefc4429c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:17.360564 containerd[1524]: time="2025-09-10T23:23:17.360507725Z" level=info msg="CreateContainer within sandbox \"d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:23:17.373022 containerd[1524]: time="2025-09-10T23:23:17.372986681Z" level=info msg="Container 7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:17.378392 systemd[1]: Started cri-containerd-393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2.scope - libcontainer container 
393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2. Sep 10 23:23:17.423822 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:17.433788 containerd[1524]: time="2025-09-10T23:23:17.433747298Z" level=info msg="CreateContainer within sandbox \"d2cf021f1f616eb63cc64fd43b6820cef56635dec770940e854f8970e774d4fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a\"" Sep 10 23:23:17.434432 containerd[1524]: time="2025-09-10T23:23:17.434402217Z" level=info msg="StartContainer for \"7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a\"" Sep 10 23:23:17.438832 containerd[1524]: time="2025-09-10T23:23:17.438800056Z" level=info msg="connecting to shim 7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a" address="unix:///run/containerd/s/0b69ca1801c39d48fcdf3dca6ac6eb0ab971358160c4093920663c750a13dd1b" protocol=ttrpc version=3 Sep 10 23:23:17.443814 containerd[1524]: time="2025-09-10T23:23:17.443763174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59n6n,Uid:56b76e01-e81c-4847-8f88-e9e155779575,Namespace:calico-system,Attempt:0,} returns sandbox id \"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2\"" Sep 10 23:23:17.465459 systemd[1]: Started cri-containerd-7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a.scope - libcontainer container 7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a. 
Sep 10 23:23:17.501611 containerd[1524]: time="2025-09-10T23:23:17.501576032Z" level=info msg="StartContainer for \"7656bfcec6ee65f2b3e9f45780b8004d21773a7bf3e2e05ceab265d94732b50a\" returns successfully" Sep 10 23:23:17.924507 systemd-networkd[1464]: calid95dd14d7a9: Gained IPv6LL Sep 10 23:23:18.048225 kubelet[2672]: E0910 23:23:18.048173 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:18.055505 kubelet[2672]: I0910 23:23:18.055367 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:18.057077 kubelet[2672]: E0910 23:23:18.056475 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:18.070094 kubelet[2672]: I0910 23:23:18.070042 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w4tx7" podStartSLOduration=36.070028897 podStartE2EDuration="36.070028897s" podCreationTimestamp="2025-09-10 23:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:18.070022817 +0000 UTC m=+40.290976520" watchObservedRunningTime="2025-09-10 23:23:18.070028897 +0000 UTC m=+40.290982600" Sep 10 23:23:18.197469 containerd[1524]: time="2025-09-10T23:23:18.197311332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:18.199052 containerd[1524]: time="2025-09-10T23:23:18.198494451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:23:18.200728 containerd[1524]: time="2025-09-10T23:23:18.200689331Z" level=info msg="ImageCreate 
event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:18.204696 containerd[1524]: time="2025-09-10T23:23:18.203837729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:18.204696 containerd[1524]: time="2025-09-10T23:23:18.204497129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.169665195s" Sep 10 23:23:18.204696 containerd[1524]: time="2025-09-10T23:23:18.204525249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:23:18.206595 containerd[1524]: time="2025-09-10T23:23:18.206279689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:23:18.218007 containerd[1524]: time="2025-09-10T23:23:18.217958964Z" level=info msg="CreateContainer within sandbox \"12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:23:18.228043 containerd[1524]: time="2025-09-10T23:23:18.227994081Z" level=info msg="Container 6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:18.240224 containerd[1524]: time="2025-09-10T23:23:18.240170996Z" level=info msg="CreateContainer within sandbox 
\"12185814df392be0474c01bb7f5760394f4619e831083a682d4be7c946d4c479\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\"" Sep 10 23:23:18.240922 containerd[1524]: time="2025-09-10T23:23:18.240898236Z" level=info msg="StartContainer for \"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\"" Sep 10 23:23:18.247488 containerd[1524]: time="2025-09-10T23:23:18.247248514Z" level=info msg="connecting to shim 6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694" address="unix:///run/containerd/s/68c61dc27ae07df5a18623e0546422624f3e257e8ca1524b58120ada0c9b0c47" protocol=ttrpc version=3 Sep 10 23:23:18.275489 systemd[1]: Started cri-containerd-6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694.scope - libcontainer container 6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694. Sep 10 23:23:18.393298 containerd[1524]: time="2025-09-10T23:23:18.393203422Z" level=info msg="StartContainer for \"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\" returns successfully" Sep 10 23:23:18.454871 containerd[1524]: time="2025-09-10T23:23:18.454423400Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:18.456446 containerd[1524]: time="2025-09-10T23:23:18.456419319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 23:23:18.459691 containerd[1524]: time="2025-09-10T23:23:18.459653358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 
252.92243ms" Sep 10 23:23:18.459759 containerd[1524]: time="2025-09-10T23:23:18.459697398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:23:18.461425 containerd[1524]: time="2025-09-10T23:23:18.461394878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:23:18.463122 containerd[1524]: time="2025-09-10T23:23:18.462765957Z" level=info msg="CreateContainer within sandbox \"30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:18.474297 containerd[1524]: time="2025-09-10T23:23:18.473405313Z" level=info msg="Container dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:18.485690 containerd[1524]: time="2025-09-10T23:23:18.485642109Z" level=info msg="CreateContainer within sandbox \"30b7950a6da22c42b4c130476009318b69e7c73511bf79dea550d19b465d3da1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9\"" Sep 10 23:23:18.486315 containerd[1524]: time="2025-09-10T23:23:18.486288269Z" level=info msg="StartContainer for \"dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9\"" Sep 10 23:23:18.489058 containerd[1524]: time="2025-09-10T23:23:18.489025788Z" level=info msg="connecting to shim dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9" address="unix:///run/containerd/s/e7a04b650f7e44d3eb538bc00eeaaa285a0040856059aa71f9a4b1c2e5632d69" protocol=ttrpc version=3 Sep 10 23:23:18.512473 systemd[1]: Started cri-containerd-dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9.scope - libcontainer container dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9. 
Sep 10 23:23:18.548438 containerd[1524]: time="2025-09-10T23:23:18.548322207Z" level=info msg="StartContainer for \"dbf2462c86b0fd3e3ef630154b245a012c233c3cabfe8841fd2ec61212ca73e9\" returns successfully"
Sep 10 23:23:18.949426 systemd-networkd[1464]: cali2d6195efbf4: Gained IPv6LL
Sep 10 23:23:18.981196 kubelet[2672]: I0910 23:23:18.981155 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:19.067628 kubelet[2672]: E0910 23:23:19.067554 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:19.067628 kubelet[2672]: E0910 23:23:19.067617 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:19.077804 systemd-networkd[1464]: calibaa01e08e1f: Gained IPv6LL
Sep 10 23:23:19.093369 kubelet[2672]: I0910 23:23:19.093300 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c6df754fc-c7k7k" podStartSLOduration=23.934444706 podStartE2EDuration="27.093280814s" podCreationTimestamp="2025-09-10 23:22:52 +0000 UTC" firstStartedPulling="2025-09-10 23:23:15.30228185 +0000 UTC m=+37.523235553" lastFinishedPulling="2025-09-10 23:23:18.461117998 +0000 UTC m=+40.682071661" observedRunningTime="2025-09-10 23:23:19.091805455 +0000 UTC m=+41.312759158" watchObservedRunningTime="2025-09-10 23:23:19.093280814 +0000 UTC m=+41.314234517"
Sep 10 23:23:19.093914 kubelet[2672]: I0910 23:23:19.093619 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64c6b9c664-sf4mz" podStartSLOduration=20.261293236 podStartE2EDuration="24.093613134s" podCreationTimestamp="2025-09-10 23:22:55 +0000 UTC" firstStartedPulling="2025-09-10 23:23:14.373084751 +0000 UTC m=+36.594038454" lastFinishedPulling="2025-09-10 23:23:18.205404649 +0000 UTC m=+40.426358352" observedRunningTime="2025-09-10 23:23:19.081481778 +0000 UTC m=+41.302435481" watchObservedRunningTime="2025-09-10 23:23:19.093613134 +0000 UTC m=+41.314566837"
Sep 10 23:23:19.134368 containerd[1524]: time="2025-09-10T23:23:19.133618121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\" id:\"036b4fbd06e98bc847d2def5e071402c1bbe09ad365b0f15525db5262c1ad91d\" pid:5178 exit_status:1 exited_at:{seconds:1757546599 nanos:123230084}"
Sep 10 23:23:19.213935 containerd[1524]: time="2025-09-10T23:23:19.213809374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\" id:\"5ec1f3822fa7d82cf04ad50fdfd5ec8ddc4d7d24923b2fc11605b17265451bf2\" pid:5208 exit_status:1 exited_at:{seconds:1757546599 nanos:212935814}"
Sep 10 23:23:19.925088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688727561.mount: Deactivated successfully.
Sep 10 23:23:20.068323 kubelet[2672]: I0910 23:23:20.068290 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:20.069102 kubelet[2672]: I0910 23:23:20.068550 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:20.069102 kubelet[2672]: E0910 23:23:20.068792 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:20.070371 kubelet[2672]: E0910 23:23:20.070345 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:20.387982 containerd[1524]: time="2025-09-10T23:23:20.387836910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:20.389022 containerd[1524]: time="2025-09-10T23:23:20.388441949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 23:23:20.389495 containerd[1524]: time="2025-09-10T23:23:20.389472429Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:20.391654 containerd[1524]: time="2025-09-10T23:23:20.391609748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:20.392364 containerd[1524]: time="2025-09-10T23:23:20.392337108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.93090591s"
Sep 10 23:23:20.392419 containerd[1524]: time="2025-09-10T23:23:20.392370588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 23:23:20.394089 containerd[1524]: time="2025-09-10T23:23:20.394025588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 23:23:20.396126 containerd[1524]: time="2025-09-10T23:23:20.396093627Z" level=info msg="CreateContainer within sandbox \"f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 23:23:20.404280 containerd[1524]: time="2025-09-10T23:23:20.403888785Z" level=info msg="Container 2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:23:20.413316 containerd[1524]: time="2025-09-10T23:23:20.413253902Z" level=info msg="CreateContainer within sandbox \"f40703ac62ce0b69dd70e0c8a4dc6474d58102d00f362ee4c20d9d86ecda80dc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\""
Sep 10 23:23:20.414277 containerd[1524]: time="2025-09-10T23:23:20.413795661Z" level=info msg="StartContainer for \"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\""
Sep 10 23:23:20.417115 containerd[1524]: time="2025-09-10T23:23:20.417008460Z" level=info msg="connecting to shim 2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca" address="unix:///run/containerd/s/7316c88d8e9bb2443681e916e204766979684bd124b1e711cf7f8216140439f3" protocol=ttrpc version=3
Sep 10 23:23:20.445451 systemd[1]: Started cri-containerd-2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca.scope - libcontainer container 2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca.
Sep 10 23:23:20.484295 containerd[1524]: time="2025-09-10T23:23:20.484227919Z" level=info msg="StartContainer for \"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" returns successfully"
Sep 10 23:23:21.084298 kubelet[2672]: I0910 23:23:21.084176 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-6vtm5" podStartSLOduration=23.01979428 podStartE2EDuration="27.083945533s" podCreationTimestamp="2025-09-10 23:22:54 +0000 UTC" firstStartedPulling="2025-09-10 23:23:16.329294135 +0000 UTC m=+38.550247838" lastFinishedPulling="2025-09-10 23:23:20.393445388 +0000 UTC m=+42.614399091" observedRunningTime="2025-09-10 23:23:21.083908533 +0000 UTC m=+43.304862236" watchObservedRunningTime="2025-09-10 23:23:21.083945533 +0000 UTC m=+43.304899236"
Sep 10 23:23:21.170767 kubelet[2672]: I0910 23:23:21.170725 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:21.186562 containerd[1524]: time="2025-09-10T23:23:21.186499263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" id:\"e5b3433946f0a146c5395f26ff90fd819f3b03758a061c3df4ccedb658944144\" pid:5331 exit_status:1 exited_at:{seconds:1757546601 nanos:179723745}"
Sep 10 23:23:21.210942 containerd[1524]: time="2025-09-10T23:23:21.210896336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\" id:\"90ed78978e6ac6a46f69c47dd0bd2281b717fe37cf1c6916a4177ad21fe3c765\" pid:5359 exited_at:{seconds:1757546601 nanos:210620776}"
Sep 10 23:23:21.259616 containerd[1524]: time="2025-09-10T23:23:21.259581401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\" id:\"24caa20f41123c7b9ae2f80132b73aafa574d9e7e69353ae4e83deaa11e74a5a\" pid:5382 exited_at:{seconds:1757546601 nanos:259335642}"
Sep 10 23:23:21.281103 containerd[1524]: time="2025-09-10T23:23:21.281054675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:21.282455 containerd[1524]: time="2025-09-10T23:23:21.282427795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 23:23:21.283352 containerd[1524]: time="2025-09-10T23:23:21.283322354Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:21.286060 containerd[1524]: time="2025-09-10T23:23:21.286023154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:21.286436 containerd[1524]: time="2025-09-10T23:23:21.286414674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 892.359206ms"
Sep 10 23:23:21.286482 containerd[1524]: time="2025-09-10T23:23:21.286442994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 23:23:21.288822 containerd[1524]: time="2025-09-10T23:23:21.288711753Z" level=info msg="CreateContainer within sandbox \"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 23:23:21.297288 containerd[1524]: time="2025-09-10T23:23:21.296446111Z" level=info msg="Container 7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:23:21.306566 containerd[1524]: time="2025-09-10T23:23:21.306514748Z" level=info msg="CreateContainer within sandbox \"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4\""
Sep 10 23:23:21.307365 containerd[1524]: time="2025-09-10T23:23:21.307337187Z" level=info msg="StartContainer for \"7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4\""
Sep 10 23:23:21.308785 containerd[1524]: time="2025-09-10T23:23:21.308758227Z" level=info msg="connecting to shim 7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4" address="unix:///run/containerd/s/2d6816c0966357c5d05c6026070a00e3dd75c292b705b7b0add957deefc4429c" protocol=ttrpc version=3
Sep 10 23:23:21.335505 systemd[1]: Started cri-containerd-7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4.scope - libcontainer container 7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4.
Sep 10 23:23:21.367417 containerd[1524]: time="2025-09-10T23:23:21.367311450Z" level=info msg="StartContainer for \"7bea32d6f7def0d19e9d5bf3e3af8b165531389613a19ddaf124cfc8b1b6f7a4\" returns successfully"
Sep 10 23:23:21.368629 containerd[1524]: time="2025-09-10T23:23:21.368573649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 23:23:22.142549 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:41648.service - OpenSSH per-connection server daemon (10.0.0.1:41648).
Sep 10 23:23:22.171655 containerd[1524]: time="2025-09-10T23:23:22.171610017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" id:\"94ce8f1b38875e43df7b9a3fd3131832f5052a41e09fae9b9135908f148fddbb\" pid:5459 exit_status:1 exited_at:{seconds:1757546602 nanos:171176737}"
Sep 10 23:23:22.230843 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 41648 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:22.232579 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:22.237750 systemd-logind[1504]: New session 9 of user core.
Sep 10 23:23:22.248560 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 23:23:22.441784 containerd[1524]: time="2025-09-10T23:23:22.441590542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:22.444040 containerd[1524]: time="2025-09-10T23:23:22.443728382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 23:23:22.445968 containerd[1524]: time="2025-09-10T23:23:22.445607661Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:22.450883 containerd[1524]: time="2025-09-10T23:23:22.450840140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:23:22.453225 containerd[1524]: time="2025-09-10T23:23:22.453188339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.08445265s"
Sep 10 23:23:22.453225 containerd[1524]: time="2025-09-10T23:23:22.453225819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 23:23:22.457208 containerd[1524]: time="2025-09-10T23:23:22.457167698Z" level=info msg="CreateContainer within sandbox \"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 23:23:22.477840 containerd[1524]: time="2025-09-10T23:23:22.475601013Z" level=info msg="Container 5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:23:22.488461 containerd[1524]: time="2025-09-10T23:23:22.488402929Z" level=info msg="CreateContainer within sandbox \"393fc42550adc469e5f993722805171e84f09341b599c7c6c5ef2ae25e0dcab2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012\""
Sep 10 23:23:22.489414 containerd[1524]: time="2025-09-10T23:23:22.489304889Z" level=info msg="StartContainer for \"5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012\""
Sep 10 23:23:22.490982 containerd[1524]: time="2025-09-10T23:23:22.490932249Z" level=info msg="connecting to shim 5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012" address="unix:///run/containerd/s/2d6816c0966357c5d05c6026070a00e3dd75c292b705b7b0add957deefc4429c" protocol=ttrpc version=3
Sep 10 23:23:22.527475 systemd[1]: Started cri-containerd-5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012.scope - libcontainer container 5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012.
Sep 10 23:23:22.667373 containerd[1524]: time="2025-09-10T23:23:22.667324760Z" level=info msg="StartContainer for \"5c75c9a39abe14c72c075fad709e1277418e22fe81698de32cbf503472dd2012\" returns successfully"
Sep 10 23:23:22.691730 sshd[5478]: Connection closed by 10.0.0.1 port 41648
Sep 10 23:23:22.692057 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:22.697025 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:41648.service: Deactivated successfully.
Sep 10 23:23:22.702016 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 23:23:22.702967 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit.
Sep 10 23:23:22.704191 systemd-logind[1504]: Removed session 9.
Sep 10 23:23:22.974269 kubelet[2672]: I0910 23:23:22.974119 2672 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 23:23:22.975668 kubelet[2672]: I0910 23:23:22.975523 2672 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 23:23:23.132896 kubelet[2672]: I0910 23:23:23.132787 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-59n6n" podStartSLOduration=23.124815947 podStartE2EDuration="28.132762354s" podCreationTimestamp="2025-09-10 23:22:55 +0000 UTC" firstStartedPulling="2025-09-10 23:23:17.446980292 +0000 UTC m=+39.667933995" lastFinishedPulling="2025-09-10 23:23:22.454926739 +0000 UTC m=+44.675880402" observedRunningTime="2025-09-10 23:23:23.132465074 +0000 UTC m=+45.353418777" watchObservedRunningTime="2025-09-10 23:23:23.132762354 +0000 UTC m=+45.353716097"
Sep 10 23:23:23.163219 containerd[1524]: time="2025-09-10T23:23:23.162753986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" id:\"400f5cc081f0043f1bacff78f6221cf533ae8e5b8309b480f4eac356676ab25e\" pid:5560 exit_status:1 exited_at:{seconds:1757546603 nanos:161663547}"
Sep 10 23:23:23.742476 kubelet[2672]: I0910 23:23:23.741962 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:23.742476 kubelet[2672]: E0910 23:23:23.742402 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:24.086935 kubelet[2672]: E0910 23:23:24.086811 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:23:24.684890 kubelet[2672]: I0910 23:23:24.684839 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:25.047670 systemd-networkd[1464]: vxlan.calico: Link UP
Sep 10 23:23:25.047676 systemd-networkd[1464]: vxlan.calico: Gained carrier
Sep 10 23:23:26.373908 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL
Sep 10 23:23:27.708068 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:41664.service - OpenSSH per-connection server daemon (10.0.0.1:41664).
Sep 10 23:23:27.762865 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 41664 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:27.764375 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:27.768326 systemd-logind[1504]: New session 10 of user core.
Sep 10 23:23:27.777452 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:23:28.025288 sshd[5727]: Connection closed by 10.0.0.1 port 41664
Sep 10 23:23:28.025728 sshd-session[5724]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:28.039721 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:41664.service: Deactivated successfully.
Sep 10 23:23:28.045836 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:23:28.048415 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:23:28.054740 systemd-logind[1504]: Removed session 10.
Sep 10 23:23:28.058915 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:41678.service - OpenSSH per-connection server daemon (10.0.0.1:41678).
Sep 10 23:23:28.127853 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 41678 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:28.129836 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:28.136494 systemd-logind[1504]: New session 11 of user core.
Sep 10 23:23:28.142453 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:23:28.362767 sshd[5748]: Connection closed by 10.0.0.1 port 41678
Sep 10 23:23:28.363988 sshd-session[5745]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:28.377538 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:41678.service: Deactivated successfully.
Sep 10 23:23:28.380211 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:23:28.383013 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:23:28.387142 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:41680.service - OpenSSH per-connection server daemon (10.0.0.1:41680).
Sep 10 23:23:28.388768 systemd-logind[1504]: Removed session 11.
Sep 10 23:23:28.449747 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 41680 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:28.451210 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:28.455252 systemd-logind[1504]: New session 12 of user core.
Sep 10 23:23:28.469438 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:23:28.646683 sshd[5768]: Connection closed by 10.0.0.1 port 41680
Sep 10 23:23:28.647094 sshd-session[5763]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:28.650584 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:41680.service: Deactivated successfully.
Sep 10 23:23:28.652410 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:23:28.653682 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:23:28.655197 systemd-logind[1504]: Removed session 12.
Sep 10 23:23:33.669033 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:53406.service - OpenSSH per-connection server daemon (10.0.0.1:53406).
Sep 10 23:23:33.732622 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 53406 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:33.734000 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:33.738757 systemd-logind[1504]: New session 13 of user core.
Sep 10 23:23:33.750476 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:23:33.914196 sshd[5792]: Connection closed by 10.0.0.1 port 53406
Sep 10 23:23:33.915622 sshd-session[5789]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:33.922436 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:53406.service: Deactivated successfully.
Sep 10 23:23:33.925287 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:23:33.926510 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:23:33.928762 systemd-logind[1504]: Removed session 13.
Sep 10 23:23:33.930922 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:53414.service - OpenSSH per-connection server daemon (10.0.0.1:53414).
Sep 10 23:23:33.993801 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 53414 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:33.995233 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:33.998920 systemd-logind[1504]: New session 14 of user core.
Sep 10 23:23:34.010413 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:23:34.253520 sshd[5808]: Connection closed by 10.0.0.1 port 53414
Sep 10 23:23:34.254237 sshd-session[5805]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:34.265710 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:53414.service: Deactivated successfully.
Sep 10 23:23:34.267339 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:23:34.267954 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:23:34.271490 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:53424.service - OpenSSH per-connection server daemon (10.0.0.1:53424).
Sep 10 23:23:34.271984 systemd-logind[1504]: Removed session 14.
Sep 10 23:23:34.321784 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 53424 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:34.323120 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:34.326976 systemd-logind[1504]: New session 15 of user core.
Sep 10 23:23:34.337433 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:23:35.922506 sshd[5822]: Connection closed by 10.0.0.1 port 53424
Sep 10 23:23:35.923072 sshd-session[5819]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:35.935747 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:53424.service: Deactivated successfully.
Sep 10 23:23:35.939905 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:23:35.940119 systemd[1]: session-15.scope: Consumed 537ms CPU time, 74.9M memory peak.
Sep 10 23:23:35.942526 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:23:35.946524 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430).
Sep 10 23:23:35.949149 systemd-logind[1504]: Removed session 15.
Sep 10 23:23:36.023828 sshd[5850]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:36.025741 sshd-session[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:36.029582 systemd-logind[1504]: New session 16 of user core.
Sep 10 23:23:36.039444 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:23:36.366376 sshd[5854]: Connection closed by 10.0.0.1 port 53430
Sep 10 23:23:36.366731 sshd-session[5850]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:36.376827 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:53430.service: Deactivated successfully.
Sep 10 23:23:36.378871 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:23:36.379871 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:23:36.382858 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:53440.service - OpenSSH per-connection server daemon (10.0.0.1:53440).
Sep 10 23:23:36.385775 systemd-logind[1504]: Removed session 16.
Sep 10 23:23:36.439389 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 53440 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:36.440748 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:36.444739 systemd-logind[1504]: New session 17 of user core.
Sep 10 23:23:36.454437 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:23:36.577074 sshd[5869]: Connection closed by 10.0.0.1 port 53440
Sep 10 23:23:36.577480 sshd-session[5866]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:36.582723 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:53440.service: Deactivated successfully.
Sep 10 23:23:36.585137 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:23:36.586745 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:23:36.590799 systemd-logind[1504]: Removed session 17.
Sep 10 23:23:37.818335 containerd[1524]: time="2025-09-10T23:23:37.818179325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" id:\"e29286c7995c334703d365603d5c9ecf1271b39fa855996822148bb9bb4e5f19\" pid:5894 exited_at:{seconds:1757546617 nanos:816401405}"
Sep 10 23:23:39.634418 containerd[1524]: time="2025-09-10T23:23:39.634311749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa7456ca59460766ddf32544189e35892317f8b194b2e2215dce5b30c7331ca\" id:\"84cf74f9a69818eae098b69a6450e113f3583a534b5925be73a82c00595fbcdf\" pid:5921 exited_at:{seconds:1757546619 nanos:634007869}"
Sep 10 23:23:41.592990 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:48818.service - OpenSSH per-connection server daemon (10.0.0.1:48818).
Sep 10 23:23:41.672553 sshd[5935]: Accepted publickey for core from 10.0.0.1 port 48818 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:41.674483 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:41.681758 systemd-logind[1504]: New session 18 of user core.
Sep 10 23:23:41.694659 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:23:41.853558 sshd[5938]: Connection closed by 10.0.0.1 port 48818
Sep 10 23:23:41.853917 sshd-session[5935]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:41.857599 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:48818.service: Deactivated successfully.
Sep 10 23:23:41.860906 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:23:41.862062 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:23:41.863845 systemd-logind[1504]: Removed session 18.
Sep 10 23:23:45.880918 kubelet[2672]: I0910 23:23:45.880877 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:45.939903 kubelet[2672]: I0910 23:23:45.939844 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:45.952370 containerd[1524]: time="2025-09-10T23:23:45.951770891Z" level=info msg="StopContainer for \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" with timeout 30 (s)"
Sep 10 23:23:45.969349 containerd[1524]: time="2025-09-10T23:23:45.969251585Z" level=info msg="Stop container \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" with signal terminated"
Sep 10 23:23:45.993650 systemd[1]: cri-containerd-ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c.scope: Deactivated successfully.
Sep 10 23:23:45.994838 systemd[1]: cri-containerd-ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c.scope: Consumed 1.533s CPU time, 40.4M memory peak, 8K read from disk.
Sep 10 23:23:45.995587 containerd[1524]: time="2025-09-10T23:23:45.995539367Z" level=info msg="received exit event container_id:\"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" id:\"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" pid:4544 exit_status:1 exited_at:{seconds:1757546625 nanos:995223044}"
Sep 10 23:23:45.997100 containerd[1524]: time="2025-09-10T23:23:45.997054422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" id:\"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" pid:4544 exit_status:1 exited_at:{seconds:1757546625 nanos:995223044}"
Sep 10 23:23:46.024442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c-rootfs.mount: Deactivated successfully.
Sep 10 23:23:46.045359 containerd[1524]: time="2025-09-10T23:23:46.045310931Z" level=info msg="StopContainer for \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" returns successfully"
Sep 10 23:23:46.047978 containerd[1524]: time="2025-09-10T23:23:46.047936796Z" level=info msg="StopPodSandbox for \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\""
Sep 10 23:23:46.053179 containerd[1524]: time="2025-09-10T23:23:46.053123727Z" level=info msg="Container to stop \"ff2aa9bb3c8f3e1fd7e0a68bcd20296efa59e82b5b2d6ec8badbdf5258aa235c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 10 23:23:46.059954 systemd[1]: cri-containerd-8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827.scope: Deactivated successfully.
Sep 10 23:23:46.064930 containerd[1524]: time="2025-09-10T23:23:46.064891921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" id:\"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" pid:4309 exit_status:137 exited_at:{seconds:1757546626 nanos:64498157}"
Sep 10 23:23:46.091673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827-rootfs.mount: Deactivated successfully.
Sep 10 23:23:46.092229 containerd[1524]: time="2025-09-10T23:23:46.092187185Z" level=info msg="shim disconnected" id=8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827 namespace=k8s.io
Sep 10 23:23:46.103437 containerd[1524]: time="2025-09-10T23:23:46.092229586Z" level=warning msg="cleaning up after shim disconnected" id=8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827 namespace=k8s.io
Sep 10 23:23:46.103437 containerd[1524]: time="2025-09-10T23:23:46.103209492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 23:23:46.140189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827-shm.mount: Deactivated successfully.
Sep 10 23:23:46.144100 containerd[1524]: time="2025-09-10T23:23:46.144005687Z" level=info msg="received exit event sandbox_id:\"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" exit_status:137 exited_at:{seconds:1757546626 nanos:64498157}" Sep 10 23:23:46.159216 kubelet[2672]: I0910 23:23:46.159180 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Sep 10 23:23:46.202413 systemd-networkd[1464]: cali7b0533e80ab: Link DOWN Sep 10 23:23:46.202420 systemd-networkd[1464]: cali7b0533e80ab: Lost carrier Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.199 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.200 [INFO][6036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" iface="eth0" netns="/var/run/netns/cni-b0451505-f7d7-bc50-4c21-e41a2ccd7be9" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.201 [INFO][6036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" iface="eth0" netns="/var/run/netns/cni-b0451505-f7d7-bc50-4c21-e41a2ccd7be9" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.209 [INFO][6036] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" after=8.587044ms iface="eth0" netns="/var/run/netns/cni-b0451505-f7d7-bc50-4c21-e41a2ccd7be9" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.209 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.209 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.230 [INFO][6051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.230 [INFO][6051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.230 [INFO][6051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.269 [INFO][6051] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.269 [INFO][6051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" HandleID="k8s-pod-network.8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Workload="localhost-k8s-calico--apiserver--9c84896fc--q6x8n-eth0" Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.271 [INFO][6051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:46.275644 containerd[1524]: 2025-09-10 23:23:46.274 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827" Sep 10 23:23:46.276420 containerd[1524]: time="2025-09-10T23:23:46.276386370Z" level=info msg="TearDown network for sandbox \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" successfully" Sep 10 23:23:46.276420 containerd[1524]: time="2025-09-10T23:23:46.276419251Z" level=info msg="StopPodSandbox for \"8341373b54302ffab9fbed114f292453ecadbe11c1189f4ae24aa997a1248827\" returns successfully" Sep 10 23:23:46.278779 systemd[1]: run-netns-cni\x2db0451505\x2df7d7\x2dbc50\x2d4c21\x2de41a2ccd7be9.mount: Deactivated successfully. 
Sep 10 23:23:46.434003 kubelet[2672]: I0910 23:23:46.433867 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gt8\" (UniqueName: \"kubernetes.io/projected/0bb26eb8-c7e6-4e42-9e76-36364255597b-kube-api-access-t5gt8\") pod \"0bb26eb8-c7e6-4e42-9e76-36364255597b\" (UID: \"0bb26eb8-c7e6-4e42-9e76-36364255597b\") " Sep 10 23:23:46.434003 kubelet[2672]: I0910 23:23:46.433933 2672 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bb26eb8-c7e6-4e42-9e76-36364255597b-calico-apiserver-certs\") pod \"0bb26eb8-c7e6-4e42-9e76-36364255597b\" (UID: \"0bb26eb8-c7e6-4e42-9e76-36364255597b\") " Sep 10 23:23:46.439174 kubelet[2672]: I0910 23:23:46.438878 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb26eb8-c7e6-4e42-9e76-36364255597b-kube-api-access-t5gt8" (OuterVolumeSpecName: "kube-api-access-t5gt8") pod "0bb26eb8-c7e6-4e42-9e76-36364255597b" (UID: "0bb26eb8-c7e6-4e42-9e76-36364255597b"). InnerVolumeSpecName "kube-api-access-t5gt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 23:23:46.439352 systemd[1]: var-lib-kubelet-pods-0bb26eb8\x2dc7e6\x2d4e42\x2d9e76\x2d36364255597b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5gt8.mount: Deactivated successfully. Sep 10 23:23:46.439606 systemd[1]: var-lib-kubelet-pods-0bb26eb8\x2dc7e6\x2d4e42\x2d9e76\x2d36364255597b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 10 23:23:46.441868 kubelet[2672]: I0910 23:23:46.441836 2672 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb26eb8-c7e6-4e42-9e76-36364255597b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "0bb26eb8-c7e6-4e42-9e76-36364255597b" (UID: "0bb26eb8-c7e6-4e42-9e76-36364255597b"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 23:23:46.535481 kubelet[2672]: I0910 23:23:46.535412 2672 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bb26eb8-c7e6-4e42-9e76-36364255597b-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:46.535481 kubelet[2672]: I0910 23:23:46.535459 2672 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gt8\" (UniqueName: \"kubernetes.io/projected/0bb26eb8-c7e6-4e42-9e76-36364255597b-kube-api-access-t5gt8\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:46.870557 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:48820.service - OpenSSH per-connection server daemon (10.0.0.1:48820). Sep 10 23:23:46.937203 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 48820 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:46.939390 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:46.947702 systemd-logind[1504]: New session 19 of user core. Sep 10 23:23:46.957472 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 23:23:47.091049 sshd[6067]: Connection closed by 10.0.0.1 port 48820 Sep 10 23:23:47.091392 sshd-session[6064]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:47.095325 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:48820.service: Deactivated successfully. Sep 10 23:23:47.097071 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 23:23:47.102185 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit. Sep 10 23:23:47.103255 systemd-logind[1504]: Removed session 19. Sep 10 23:23:47.169862 systemd[1]: Removed slice kubepods-besteffort-pod0bb26eb8_c7e6_4e42_9e76_36364255597b.slice - libcontainer container kubepods-besteffort-pod0bb26eb8_c7e6_4e42_9e76_36364255597b.slice. 
Sep 10 23:23:47.170185 systemd[1]: kubepods-besteffort-pod0bb26eb8_c7e6_4e42_9e76_36364255597b.slice: Consumed 1.552s CPU time, 40.6M memory peak, 8K read from disk. Sep 10 23:23:47.880596 kubelet[2672]: I0910 23:23:47.880548 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb26eb8-c7e6-4e42-9e76-36364255597b" path="/var/lib/kubelet/pods/0bb26eb8-c7e6-4e42-9e76-36364255597b/volumes" Sep 10 23:23:49.092450 containerd[1524]: time="2025-09-10T23:23:49.092238273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a32b1f12729ea87d62329c46947e444d8b0aa1f2cf9b1b11a2b3ecd0a5b9731d\" id:\"32dd314ed610a16b55bb095aa2bb34cc963cc642585b969c8b75c66b3838f200\" pid:6096 exited_at:{seconds:1757546629 nanos:91669828}" Sep 10 23:23:49.877999 kubelet[2672]: E0910 23:23:49.877942 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:23:51.207052 containerd[1524]: time="2025-09-10T23:23:51.206984541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a77b9979f8bb60b8502b101cc15b8c259305cd2272164ca7ee5acbb135ac694\" id:\"43fb75b35de0e010c318567ee03b5f35ef0dcf3a6ab6744db5c41206da6bf0d7\" pid:6122 exited_at:{seconds:1757546631 nanos:206599618}" Sep 10 23:23:52.104089 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:46864.service - OpenSSH per-connection server daemon (10.0.0.1:46864). Sep 10 23:23:52.167574 sshd[6134]: Accepted publickey for core from 10.0.0.1 port 46864 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:52.169026 sshd-session[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:52.172823 systemd-logind[1504]: New session 20 of user core. Sep 10 23:23:52.181402 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 10 23:23:52.302271 sshd[6137]: Connection closed by 10.0.0.1 port 46864 Sep 10 23:23:52.302952 sshd-session[6134]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:52.306523 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:46864.service: Deactivated successfully. Sep 10 23:23:52.308415 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 23:23:52.309122 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. Sep 10 23:23:52.310236 systemd-logind[1504]: Removed session 20.