Sep 5 23:52:16.835863 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:52:16.835886 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:52:16.835896 kernel: KASLR enabled
Sep 5 23:52:16.835901 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:52:16.835907 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 5 23:52:16.835913 kernel: random: crng init done
Sep 5 23:52:16.835920 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:52:16.835926 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 5 23:52:16.835932 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 23:52:16.835939 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835946 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835952 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835958 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835964 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835971 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835979 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835985 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835992 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 23:52:16.835998 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 23:52:16.836004 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:52:16.836011 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:52:16.836017 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 5 23:52:16.836023 kernel: Zone ranges:
Sep 5 23:52:16.836030 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:52:16.836036 kernel: DMA32 empty
Sep 5 23:52:16.836043 kernel: Normal empty
Sep 5 23:52:16.836050 kernel: Movable zone start for each node
Sep 5 23:52:16.836056 kernel: Early memory node ranges
Sep 5 23:52:16.836063 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 5 23:52:16.836069 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 5 23:52:16.836076 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 5 23:52:16.836082 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 23:52:16.836088 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 23:52:16.836095 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 23:52:16.836102 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 23:52:16.836108 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 23:52:16.836115 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 23:52:16.836122 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:52:16.836129 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:52:16.836136 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:52:16.836145 kernel: psci: Trusted OS migration not required
Sep 5 23:52:16.836151 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:52:16.836159 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 23:52:16.836167 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:52:16.836174 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:52:16.836181 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 23:52:16.836187 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:52:16.836194 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:52:16.836201 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:52:16.836208 kernel: CPU features: detected: Spectre-v4
Sep 5 23:52:16.836215 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:52:16.836222 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:52:16.836228 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:52:16.836236 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:52:16.836243 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:52:16.836250 kernel: alternatives: applying boot alternatives
Sep 5 23:52:16.836258 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:52:16.836265 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:52:16.836272 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:52:16.836279 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:52:16.836285 kernel: Fallback order for Node 0: 0
Sep 5 23:52:16.836292 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 5 23:52:16.836299 kernel: Policy zone: DMA
Sep 5 23:52:16.836306 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:52:16.836313 kernel: software IO TLB: area num 4.
Sep 5 23:52:16.836320 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 5 23:52:16.836327 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Sep 5 23:52:16.836334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 23:52:16.836341 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:52:16.836348 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:52:16.836355 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 23:52:16.836362 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:52:16.836369 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:52:16.836376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:52:16.836383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 23:52:16.836391 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:52:16.836398 kernel: GICv3: 256 SPIs implemented
Sep 5 23:52:16.836404 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:52:16.836411 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:52:16.836418 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:52:16.836425 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 23:52:16.836431 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 23:52:16.836438 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:52:16.836445 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:52:16.836452 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 5 23:52:16.836459 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 5 23:52:16.836465 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:52:16.836508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:52:16.836515 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:52:16.836522 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:52:16.836529 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:52:16.836536 kernel: arm-pv: using stolen time PV
Sep 5 23:52:16.836543 kernel: Console: colour dummy device 80x25
Sep 5 23:52:16.836549 kernel: ACPI: Core revision 20230628
Sep 5 23:52:16.836557 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:52:16.836564 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:52:16.836571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:52:16.836579 kernel: landlock: Up and running.
Sep 5 23:52:16.836586 kernel: SELinux: Initializing.
Sep 5 23:52:16.836593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:52:16.836600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:52:16.836607 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 23:52:16.836614 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 23:52:16.836622 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:52:16.836629 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:52:16.836640 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 23:52:16.836654 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 23:52:16.836660 kernel: Remapping and enabling EFI services.
Sep 5 23:52:16.836667 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:52:16.836674 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:52:16.836681 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 23:52:16.836689 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 5 23:52:16.836702 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:52:16.836710 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:52:16.836717 kernel: Detected PIPT I-cache on CPU2
Sep 5 23:52:16.836724 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 23:52:16.836733 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 5 23:52:16.836740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:52:16.836752 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 23:52:16.836760 kernel: Detected PIPT I-cache on CPU3
Sep 5 23:52:16.836767 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 23:52:16.836774 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 5 23:52:16.836782 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:52:16.836789 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 23:52:16.836796 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 23:52:16.836805 kernel: SMP: Total of 4 processors activated.
Sep 5 23:52:16.836812 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:52:16.836819 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:52:16.836827 kernel: CPU features: detected: Common not Private translations
Sep 5 23:52:16.836834 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:52:16.836841 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 23:52:16.836848 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:52:16.836856 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:52:16.836865 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:52:16.836872 kernel: CPU features: detected: RAS Extension Support
Sep 5 23:52:16.836879 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 23:52:16.836887 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:52:16.836894 kernel: alternatives: applying system-wide alternatives
Sep 5 23:52:16.836901 kernel: devtmpfs: initialized
Sep 5 23:52:16.836909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:52:16.836916 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 23:52:16.836923 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:52:16.836932 kernel: SMBIOS 3.0.0 present.
Sep 5 23:52:16.836939 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 5 23:52:16.836947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:52:16.836955 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:52:16.836962 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:52:16.836970 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:52:16.836977 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:52:16.836984 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 5 23:52:16.836993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:52:16.837000 kernel: cpuidle: using governor menu
Sep 5 23:52:16.837007 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:52:16.837015 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:52:16.837022 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:52:16.837029 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:52:16.837037 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:52:16.837044 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:52:16.837052 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:52:16.837059 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:52:16.837068 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:52:16.837076 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:52:16.837083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:52:16.837090 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:52:16.837098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:52:16.837105 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:52:16.837112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:52:16.837119 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:52:16.837127 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:52:16.837135 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:52:16.837143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:52:16.837150 kernel: ACPI: Interpreter enabled
Sep 5 23:52:16.837157 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:52:16.837164 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:52:16.837172 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:52:16.837179 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:52:16.837187 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 23:52:16.837339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:52:16.837416 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:52:16.837498 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:52:16.837567 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 23:52:16.837632 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 23:52:16.837642 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 23:52:16.837650 kernel: PCI host bridge to bus 0000:00
Sep 5 23:52:16.837732 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 23:52:16.837798 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:52:16.837862 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 23:52:16.837919 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 23:52:16.837998 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 23:52:16.838074 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 23:52:16.838161 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 23:52:16.838230 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 23:52:16.838297 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:52:16.838363 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 23:52:16.838429 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 23:52:16.838516 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 23:52:16.838577 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 23:52:16.838634 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:52:16.838703 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 23:52:16.838713 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:52:16.838721 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:52:16.838728 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:52:16.838736 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:52:16.838743 kernel: iommu: Default domain type: Translated
Sep 5 23:52:16.838750 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:52:16.838758 kernel: efivars: Registered efivars operations
Sep 5 23:52:16.838767 kernel: vgaarb: loaded
Sep 5 23:52:16.838775 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:52:16.838782 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:52:16.838790 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:52:16.838797 kernel: pnp: PnP ACPI init
Sep 5 23:52:16.838875 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 23:52:16.838886 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:52:16.838893 kernel: NET: Registered PF_INET protocol family
Sep 5 23:52:16.838901 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:52:16.838910 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:52:16.838918 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:52:16.838925 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:52:16.838933 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:52:16.838940 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:52:16.838948 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:52:16.838955 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:52:16.838962 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:52:16.838971 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:52:16.838979 kernel: kvm [1]: HYP mode not available
Sep 5 23:52:16.838986 kernel: Initialise system trusted keyrings
Sep 5 23:52:16.838993 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:52:16.839000 kernel: Key type asymmetric registered
Sep 5 23:52:16.839008 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:52:16.839015 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:52:16.839022 kernel: io scheduler mq-deadline registered
Sep 5 23:52:16.839030 kernel: io scheduler kyber registered
Sep 5 23:52:16.839037 kernel: io scheduler bfq registered
Sep 5 23:52:16.839046 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:52:16.839053 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:52:16.839061 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:52:16.839127 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 23:52:16.839137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:52:16.839145 kernel: thunder_xcv, ver 1.0
Sep 5 23:52:16.839152 kernel: thunder_bgx, ver 1.0
Sep 5 23:52:16.839159 kernel: nicpf, ver 1.0
Sep 5 23:52:16.839166 kernel: nicvf, ver 1.0
Sep 5 23:52:16.839240 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:52:16.839303 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:52:16 UTC (1757116336)
Sep 5 23:52:16.839313 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:52:16.839321 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 23:52:16.839328 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:52:16.839335 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:52:16.839343 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:52:16.839350 kernel: Segment Routing with IPv6
Sep 5 23:52:16.839360 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:52:16.839367 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:52:16.839374 kernel: Key type dns_resolver registered
Sep 5 23:52:16.839381 kernel: registered taskstats version 1
Sep 5 23:52:16.839389 kernel: Loading compiled-in X.509 certificates
Sep 5 23:52:16.839396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:52:16.839403 kernel: Key type .fscrypt registered
Sep 5 23:52:16.839410 kernel: Key type fscrypt-provisioning registered
Sep 5 23:52:16.839418 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:52:16.839427 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:52:16.839434 kernel: ima: No architecture policies found
Sep 5 23:52:16.839442 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:52:16.839449 kernel: clk: Disabling unused clocks
Sep 5 23:52:16.839456 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:52:16.839464 kernel: Run /init as init process
Sep 5 23:52:16.839484 kernel: with arguments:
Sep 5 23:52:16.839491 kernel: /init
Sep 5 23:52:16.839498 kernel: with environment:
Sep 5 23:52:16.839508 kernel: HOME=/
Sep 5 23:52:16.839515 kernel: TERM=linux
Sep 5 23:52:16.839522 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:52:16.839531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:52:16.839541 systemd[1]: Detected virtualization kvm.
Sep 5 23:52:16.839549 systemd[1]: Detected architecture arm64.
Sep 5 23:52:16.839556 systemd[1]: Running in initrd.
Sep 5 23:52:16.839565 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:52:16.839573 systemd[1]: Hostname set to <localhost>.
Sep 5 23:52:16.839581 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:52:16.839589 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:52:16.839597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:52:16.839605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:52:16.839613 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:52:16.839621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:52:16.839630 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:52:16.839638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:52:16.839648 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:52:16.839656 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:52:16.839664 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:52:16.839672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:52:16.839680 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:52:16.839689 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:52:16.839702 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:52:16.839711 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:52:16.839719 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:52:16.839727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:52:16.839735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:52:16.839743 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:52:16.839751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:52:16.839760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:52:16.839768 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:52:16.839776 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:52:16.839784 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:52:16.839792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:52:16.839800 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:52:16.839808 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:52:16.839816 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:52:16.839824 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:52:16.839833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:16.839841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:52:16.839849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:52:16.839857 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:52:16.839885 systemd-journald[237]: Collecting audit messages is disabled.
Sep 5 23:52:16.839906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:52:16.839915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:52:16.839922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:16.839932 systemd-journald[237]: Journal started
Sep 5 23:52:16.839951 systemd-journald[237]: Runtime Journal (/run/log/journal/0ae48b09ba344964916057d2f52744c1) is 5.9M, max 47.3M, 41.4M free.
Sep 5 23:52:16.824690 systemd-modules-load[239]: Inserted module 'overlay'
Sep 5 23:52:16.841820 kernel: Bridge firewalling registered
Sep 5 23:52:16.841838 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:52:16.840576 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 5 23:52:16.842701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:52:16.844230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:52:16.856608 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:16.858107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:52:16.859789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:52:16.863394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:52:16.871194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:52:16.875508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:52:16.877543 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:52:16.878581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:52:16.894828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:52:16.897254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:52:16.904156 dracut-cmdline[278]: dracut-dracut-053
Sep 5 23:52:16.906549 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:52:16.920800 systemd-resolved[282]: Positive Trust Anchors:
Sep 5 23:52:16.920814 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:52:16.920846 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:52:16.925557 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep 5 23:52:16.927332 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:52:16.930011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:52:16.969500 kernel: SCSI subsystem initialized
Sep 5 23:52:16.973489 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:52:16.982526 kernel: iscsi: registered transport (tcp)
Sep 5 23:52:16.993734 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:52:16.993759 kernel: QLogic iSCSI HBA Driver
Sep 5 23:52:17.034768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:52:17.047612 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:52:17.063016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:52:17.063063 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:52:17.063085 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:52:17.108522 kernel: raid6: neonx8 gen() 15729 MB/s
Sep 5 23:52:17.125504 kernel: raid6: neonx4 gen() 15603 MB/s
Sep 5 23:52:17.142484 kernel: raid6: neonx2 gen() 13160 MB/s
Sep 5 23:52:17.159483 kernel: raid6: neonx1 gen() 10507 MB/s
Sep 5 23:52:17.176484 kernel: raid6: int64x8 gen() 6941 MB/s
Sep 5 23:52:17.193497 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 5 23:52:17.210484 kernel: raid6: int64x2 gen() 6117 MB/s
Sep 5 23:52:17.227494 kernel: raid6: int64x1 gen() 5039 MB/s
Sep 5 23:52:17.227518 kernel: raid6: using algorithm neonx8 gen() 15729 MB/s
Sep 5 23:52:17.244497 kernel: raid6: .... xor() 12027 MB/s, rmw enabled
Sep 5 23:52:17.244522 kernel: raid6: using neon recovery algorithm
Sep 5 23:52:17.249485 kernel: xor: measuring software checksum speed
Sep 5 23:52:17.249500 kernel: 8regs : 19750 MB/sec
Sep 5 23:52:17.250482 kernel: 32regs : 18007 MB/sec
Sep 5 23:52:17.250495 kernel: arm64_neon : 27043 MB/sec
Sep 5 23:52:17.250504 kernel: xor: using function: arm64_neon (27043 MB/sec)
Sep 5 23:52:17.299511 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:52:17.310175 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:52:17.322653 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:52:17.333667 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Sep 5 23:52:17.336768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:52:17.346616 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:52:17.357819 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Sep 5 23:52:17.382838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:52:17.393596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:52:17.433120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:52:17.442619 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:52:17.455877 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:52:17.457181 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:52:17.459552 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:52:17.461567 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:52:17.467608 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:52:17.477236 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:52:17.486503 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 23:52:17.486882 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 23:52:17.489860 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:52:17.489897 kernel: GPT:9289727 != 19775487
Sep 5 23:52:17.489908 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:52:17.489918 kernel: GPT:9289727 != 19775487
Sep 5 23:52:17.490600 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:52:17.491494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:52:17.495212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:52:17.495384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:52:17.499129 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:17.500260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:52:17.500320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:17.502596 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:17.510514 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (507)
Sep 5 23:52:17.510613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:17.513332 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (513)
Sep 5 23:52:17.523522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:17.528352 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 23:52:17.533180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 23:52:17.540056 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 23:52:17.541004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 23:52:17.546336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 23:52:17.557666 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:52:17.559162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:17.565092 disk-uuid[551]: Primary Header is updated.
Sep 5 23:52:17.565092 disk-uuid[551]: Secondary Entries is updated.
Sep 5 23:52:17.565092 disk-uuid[551]: Secondary Header is updated.
Sep 5 23:52:17.569502 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:52:17.573501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:52:17.574835 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:52:17.577484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:52:18.579493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 23:52:18.579546 disk-uuid[554]: The operation has completed successfully.
Sep 5 23:52:18.600676 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:52:18.600777 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:52:18.624618 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:52:18.627404 sh[574]: Success
Sep 5 23:52:18.636486 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:52:18.674050 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:52:18.675520 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:52:18.676222 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:52:18.685879 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:52:18.685917 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:18.686764 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:52:18.686792 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:52:18.687817 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:52:18.691146 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:52:18.692226 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:52:18.692892 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:52:18.695104 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:52:18.704076 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:18.704110 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:18.704121 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:52:18.706497 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:52:18.714198 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:52:18.715495 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:18.720549 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:52:18.725599 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:52:18.780529 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:52:18.789651 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:52:18.790755 ignition[663]: Ignition 2.19.0
Sep 5 23:52:18.790761 ignition[663]: Stage: fetch-offline
Sep 5 23:52:18.790792 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:18.790799 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:52:18.790953 ignition[663]: parsed url from cmdline: ""
Sep 5 23:52:18.790955 ignition[663]: no config URL provided
Sep 5 23:52:18.790960 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:52:18.790967 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:52:18.791010 ignition[663]: op(1): [started] loading QEMU firmware config module
Sep 5 23:52:18.791015 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 23:52:18.796634 ignition[663]: op(1): [finished] loading QEMU firmware config module
Sep 5 23:52:18.808831 systemd-networkd[763]: lo: Link UP
Sep 5 23:52:18.808840 systemd-networkd[763]: lo: Gained carrier
Sep 5 23:52:18.809489 systemd-networkd[763]: Enumeration completed
Sep 5 23:52:18.809560 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:52:18.809903 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:52:18.809906 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:52:18.810699 systemd-networkd[763]: eth0: Link UP
Sep 5 23:52:18.810702 systemd-networkd[763]: eth0: Gained carrier
Sep 5 23:52:18.810709 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:52:18.811055 systemd[1]: Reached target network.target - Network.
Sep 5 23:52:18.827503 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 23:52:18.846795 ignition[663]: parsing config with SHA512: 706863282c33cdcec9f895ba053a7363308c1241f2e2195c3f9fc822f143774ead97ba6a4fbd65fb761fe94c6411eeb2c12be3adc7d0527e1b0dfd31a0935d9c
Sep 5 23:52:18.852501 unknown[663]: fetched base config from "system"
Sep 5 23:52:18.852512 unknown[663]: fetched user config from "qemu"
Sep 5 23:52:18.852908 ignition[663]: fetch-offline: fetch-offline passed
Sep 5 23:52:18.852969 ignition[663]: Ignition finished successfully
Sep 5 23:52:18.854831 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:52:18.856241 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 23:52:18.868597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:52:18.878372 ignition[770]: Ignition 2.19.0
Sep 5 23:52:18.878381 ignition[770]: Stage: kargs
Sep 5 23:52:18.878590 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:18.878600 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:52:18.880006 ignition[770]: kargs: kargs passed
Sep 5 23:52:18.880068 ignition[770]: Ignition finished successfully
Sep 5 23:52:18.881821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:52:18.883430 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:52:18.895398 ignition[778]: Ignition 2.19.0
Sep 5 23:52:18.895406 ignition[778]: Stage: disks
Sep 5 23:52:18.895586 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:18.895595 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:52:18.896438 ignition[778]: disks: disks passed
Sep 5 23:52:18.897817 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:52:18.896498 ignition[778]: Ignition finished successfully
Sep 5 23:52:18.898778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:52:18.899932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:52:18.901416 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:52:18.902661 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:52:18.904078 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:52:18.915626 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:52:18.924591 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 23:52:18.928012 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:52:18.931036 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:52:18.972482 kernel: EXT4-fs (vda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:52:18.972835 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:52:18.973814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:52:18.984538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:52:18.985953 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:52:18.986953 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 23:52:18.987025 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:52:18.987076 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:52:18.992627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:52:18.994491 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798)
Sep 5 23:52:18.994603 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:52:18.998509 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:18.998526 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:18.998537 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:52:18.999490 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:52:19.000975 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:52:19.028586 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:52:19.032624 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:52:19.036424 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:52:19.039671 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:52:19.101813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:52:19.114587 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:52:19.115975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:52:19.120483 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:19.134287 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:52:19.135721 ignition[912]: INFO : Ignition 2.19.0
Sep 5 23:52:19.135721 ignition[912]: INFO : Stage: mount
Sep 5 23:52:19.136916 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:19.136916 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:52:19.136916 ignition[912]: INFO : mount: mount passed
Sep 5 23:52:19.136916 ignition[912]: INFO : Ignition finished successfully
Sep 5 23:52:19.137952 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:52:19.149580 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:52:19.685353 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:52:19.697642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:52:19.703332 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (924)
Sep 5 23:52:19.703364 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:19.703375 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:19.704547 kernel: BTRFS info (device vda6): using free space tree
Sep 5 23:52:19.706477 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 23:52:19.707519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:52:19.722445 ignition[941]: INFO : Ignition 2.19.0
Sep 5 23:52:19.722445 ignition[941]: INFO : Stage: files
Sep 5 23:52:19.723956 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:19.723956 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 23:52:19.723956 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:52:19.726812 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:52:19.726812 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:52:19.726812 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:52:19.726812 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:52:19.726812 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:52:19.726224 unknown[941]: wrote ssh authorized keys file for user: core
Sep 5 23:52:19.732387 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:52:19.732387 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 23:52:19.781305 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:52:20.007290 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 23:52:20.007290 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:52:20.010366 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 23:52:20.343769 systemd-networkd[763]: eth0: Gained IPv6LL
Sep 5 23:52:20.567597 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 23:52:21.093675 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 23:52:21.095666 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:52:21.117774 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:52:21.121746 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:52:21.124096 ignition[941]: INFO : files: files passed
Sep 5 23:52:21.124096 ignition[941]: INFO : Ignition finished successfully
Sep 5 23:52:21.124629 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:52:21.138610 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:52:21.140753 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:52:21.143211 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:52:21.143287 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:52:21.149077 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 23:52:21.152456 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:52:21.152456 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:52:21.155135 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:52:21.157536 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:52:21.158937 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:52:21.176611 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:52:21.195594 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:52:21.196372 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:52:21.197575 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:52:21.198336 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:52:21.200599 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:52:21.201375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:52:21.216193 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:52:21.223653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:52:21.231042 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:52:21.232086 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:52:21.233675 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:52:21.235190 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:52:21.235301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:52:21.237267 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:52:21.238935 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:52:21.240254 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:52:21.241568 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:52:21.243447 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:52:21.245062 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:52:21.246454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:52:21.248054 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:52:21.249534 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:52:21.251051 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:52:21.252179 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:52:21.252290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:52:21.254130 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:52:21.255785 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:52:21.257296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:52:21.260502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:52:21.261448 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:52:21.261576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:52:21.263936 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:52:21.264052 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:52:21.265513 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:52:21.266742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:52:21.272525 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:52:21.273533 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:52:21.275190 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:52:21.276446 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:52:21.276556 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:52:21.277823 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:52:21.277902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:52:21.279149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:52:21.279255 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:52:21.280552 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:52:21.280662 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:52:21.291630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 5 23:52:21.292300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:52:21.292416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:52:21.294792 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:52:21.296219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:52:21.296343 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:52:21.297778 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:52:21.297923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:52:21.302900 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:52:21.302997 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:52:21.306356 ignition[995]: INFO : Ignition 2.19.0 Sep 5 23:52:21.306356 ignition[995]: INFO : Stage: umount Sep 5 23:52:21.307884 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:52:21.307884 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 23:52:21.307884 ignition[995]: INFO : umount: umount passed Sep 5 23:52:21.307884 ignition[995]: INFO : Ignition finished successfully Sep 5 23:52:21.309135 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:52:21.309290 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:52:21.313396 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:52:21.313838 systemd[1]: Stopped target network.target - Network. Sep 5 23:52:21.314949 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:52:21.315005 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:52:21.316223 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:52:21.316260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:52:21.317684 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:52:21.317727 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:52:21.319143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:52:21.319183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:52:21.320654 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:52:21.322269 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:52:21.329545 systemd-networkd[763]: eth0: DHCPv6 lease lost Sep 5 23:52:21.331083 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:52:21.331206 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:52:21.333007 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:52:21.333037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:52:21.341628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:52:21.342296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:52:21.342346 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:52:21.343999 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:52:21.345611 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:52:21.346195 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
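
The umount-stage banner above ("Stage: umount") and the stops of ignition-fetch-offline, ignition-disks, ignition-kargs, and ignition-mount reflect Ignition's fixed stage order. A minimal sketch of that order follows; mapping the stage names onto the ignition-<stage>.service units seen in this log is an assumption, and on this QEMU boot only fetch-offline ran (the network fetch stage is listed for completeness).

    # Ignition's run order on first boot; "umount" is the teardown counterpart
    # of "mount" and is what logs "Stage: umount" above. The unit-name mapping
    # is an assumption based on the services stopped in this log.
    STAGES = ("fetch-offline", "fetch", "kargs", "disks", "mount", "files", "umount")
    for stage in STAGES:
        print(f"ignition-{stage}.service")
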
Sep 5 23:52:21.350750 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:52:21.350808 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:52:21.351744 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:52:21.351786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:52:21.353136 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:52:21.353171 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:52:21.356577 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:52:21.356659 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:52:21.360229 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:52:21.360362 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:52:21.361794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:52:21.361833 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:52:21.365870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:52:21.365904 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:52:21.367337 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:52:21.367381 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:52:21.370445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:52:21.370506 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:52:21.372938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:52:21.372982 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:52:21.394623 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:52:21.395401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:52:21.395452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:52:21.397246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:52:21.397284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:21.398965 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:52:21.399046 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:52:21.400381 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:52:21.400485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:52:21.402368 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:52:21.403297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:52:21.403357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:52:21.405320 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:52:21.414216 systemd[1]: Switching root. Sep 5 23:52:21.441447 systemd-journald[237]: Journal stopped Sep 5 23:52:22.091295 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Sep 5 23:52:22.091349 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:52:22.091364 kernel: SELinux: policy capability open_perms=1 Sep 5 23:52:22.091374 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:52:22.091384 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:52:22.091393 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:52:22.091403 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:52:22.091413 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:52:22.091424 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:52:22.091434 kernel: audit: type=1403 audit(1757116341.580:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:52:22.091444 systemd[1]: Successfully loaded SELinux policy in 30.792ms. Sep 5 23:52:22.091465 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.402ms. Sep 5 23:52:22.091497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:52:22.091508 systemd[1]: Detected virtualization kvm. Sep 5 23:52:22.091519 systemd[1]: Detected architecture arm64. Sep 5 23:52:22.091529 systemd[1]: Detected first boot. Sep 5 23:52:22.091539 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:52:22.091549 zram_generator::config[1040]: No configuration found. Sep 5 23:52:22.091560 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:52:22.091571 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 23:52:22.091584 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 23:52:22.091594 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 23:52:22.091606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:52:22.091616 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:52:22.091627 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:52:22.091637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:52:22.091647 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:52:22.091658 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:52:22.091680 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:52:22.091695 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:52:22.091706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:52:22.091716 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:52:22.091727 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:52:22.091737 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:52:22.091748 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:52:22.091758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
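
The long plus/minus string in the "systemd 255 running in system mode" line above encodes compile-time features. A trivial way to split it (abbreviated here) into enabled and disabled sets:

    # Splitting systemd's compile-time feature string (abbreviated from the
    # log line above) into enabled (+) and disabled (-) features.
    flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL"
    enabled = sorted(f[1:] for f in flags.split() if f[0] == "+")
    disabled = sorted(f[1:] for f in flags.split() if f[0] == "-")
    print(enabled)   # ['AUDIT', 'IMA', 'OPENSSL', 'PAM', 'SECCOMP', 'SELINUX', 'SMACK']
    print(disabled)  # ['ACL', 'APPARMOR', 'GNUTLS']
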
Sep 5 23:52:22.091769 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:52:22.091782 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:52:22.091793 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 23:52:22.091803 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 23:52:22.091814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 23:52:22.091824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:52:22.091835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:52:22.091846 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:52:22.091856 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:52:22.091869 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:52:22.091880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:52:22.091903 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:52:22.091914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:52:22.091926 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:52:22.091938 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:52:22.091950 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:52:22.091960 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:52:22.091982 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:52:22.091994 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:52:22.092006 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:52:22.092017 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:52:22.092028 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:52:22.092040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:52:22.092050 systemd[1]: Reached target machines.target - Containers. Sep 5 23:52:22.092060 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:52:22.092071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:52:22.092084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:52:22.092095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:52:22.092106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:52:22.092116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:52:22.092127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:52:22.092137 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:52:22.092148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:52:22.092163 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 5 23:52:22.092173 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:52:22.092186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:52:22.092197 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:52:22.092207 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:52:22.092217 kernel: fuse: init (API version 7.39) Sep 5 23:52:22.092227 kernel: loop: module loaded Sep 5 23:52:22.092236 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:52:22.092247 kernel: ACPI: bus type drm_connector registered Sep 5 23:52:22.092257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:52:22.092267 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:52:22.092280 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:52:22.092290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:52:22.092300 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 23:52:22.092310 systemd[1]: Stopped verity-setup.service. Sep 5 23:52:22.092338 systemd-journald[1107]: Collecting audit messages is disabled. Sep 5 23:52:22.092359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:52:22.092369 systemd-journald[1107]: Journal started Sep 5 23:52:22.092392 systemd-journald[1107]: Runtime Journal (/run/log/journal/0ae48b09ba344964916057d2f52744c1) is 5.9M, max 47.3M, 41.4M free. Sep 5 23:52:21.912304 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:52:21.932395 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 23:52:21.932755 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:52:22.095533 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:52:22.096101 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:52:22.097187 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:52:22.098100 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:52:22.099075 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:52:22.100042 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:52:22.101029 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:52:22.102216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:52:22.103461 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:52:22.103603 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:52:22.104739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:52:22.104874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:52:22.106014 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:52:22.106137 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:52:22.107233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:52:22.107370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:52:22.108645 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:52:22.108791 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
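
The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop services above are instances of a single systemd template unit; "%i" in the template expands to the instance name after the "@". The sketch below only approximates the modprobe@.service shipped with systemd, and the exact unit text may differ by version:

    # Approximates systemd's modprobe@.service template; exact upstream text
    # may differ by version. Instantiating modprobe@fuse.service substitutes
    # "fuse" for %i, which is how each unit above loads one module.
    TEMPLATE = "\n".join((
        "[Unit]",
        "Description=Load Kernel Module %i",
        "DefaultDependencies=no",
        "",
        "[Service]",
        "Type=oneshot",
        "ExecStart=-/sbin/modprobe -abq %i",
    ))
    print(TEMPLATE.replace("%i", "fuse"))
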
Sep 5 23:52:22.110031 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:52:22.110155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:52:22.111298 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:52:22.112455 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:52:22.113835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:52:22.125760 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:52:22.136554 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:52:22.138358 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:52:22.139303 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:52:22.139342 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:52:22.141051 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:52:22.142988 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:52:22.144812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:52:22.145689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:52:22.146902 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:52:22.149733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:52:22.150866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:52:22.151811 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:52:22.152766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:52:22.156598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:52:22.160808 systemd-journald[1107]: Time spent on flushing to /var/log/journal/0ae48b09ba344964916057d2f52744c1 is 36.072ms for 853 entries. Sep 5 23:52:22.160808 systemd-journald[1107]: System Journal (/var/log/journal/0ae48b09ba344964916057d2f52744c1) is 8.0M, max 195.6M, 187.6M free. Sep 5 23:52:22.212011 systemd-journald[1107]: Received client request to flush runtime journal. Sep 5 23:52:22.212059 kernel: loop0: detected capacity change from 0 to 114328 Sep 5 23:52:22.212076 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:52:22.161739 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:52:22.164792 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:52:22.168930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:52:22.170167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:52:22.171456 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:52:22.174566 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:52:22.175922 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
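
For scale, the journal flush statistics reported above work out to tens of microseconds per entry:

    # Per-entry cost implied by "Time spent on flushing ... is 36.072ms for
    # 853 entries" in the log above.
    ms, entries = 36.072, 853
    print(f"{ms / entries * 1000:.1f} us/entry")  # ~42.3 us
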
Sep 5 23:52:22.181283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:52:22.189663 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:52:22.196534 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:52:22.199517 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:52:22.202445 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:52:22.206380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:52:22.207945 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:52:22.214123 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:52:22.218396 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:52:22.219815 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:52:22.228506 kernel: loop1: detected capacity change from 0 to 203944 Sep 5 23:52:22.235877 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 5 23:52:22.236247 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Sep 5 23:52:22.240964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:52:22.255125 kernel: loop2: detected capacity change from 0 to 114432 Sep 5 23:52:22.285846 kernel: loop3: detected capacity change from 0 to 114328 Sep 5 23:52:22.288489 kernel: loop4: detected capacity change from 0 to 203944 Sep 5 23:52:22.293508 kernel: loop5: detected capacity change from 0 to 114432 Sep 5 23:52:22.296152 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 23:52:22.296524 (sd-merge)[1175]: Merged extensions into '/usr'. Sep 5 23:52:22.301604 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:52:22.301619 systemd[1]: Reloading... Sep 5 23:52:22.351508 zram_generator::config[1202]: No configuration found. Sep 5 23:52:22.430583 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:52:22.454235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:52:22.490132 systemd[1]: Reloading finished in 188 ms. Sep 5 23:52:22.518505 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:52:22.519639 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:52:22.530622 systemd[1]: Starting ensure-sysext.service... Sep 5 23:52:22.532659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:52:22.537451 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:52:22.537593 systemd[1]: Reloading... Sep 5 23:52:22.549520 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:52:22.549786 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
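
The (sd-merge) lines above show systemd-sysext attaching the containerd-flatcar, docker-flatcar, and kubernetes extension images (the loop0-loop5 capacity changes are their backing loop devices) as an overlay on /usr. A sketch of the directory contract involved, using the kubernetes.raw link written during the Ignition files stage as the concrete example:

    from pathlib import Path

    # systemd-sysext picks up *.raw images from these hierarchies; each image
    # must carry usr/lib/extension-release.d/extension-release.<NAME> whose
    # ID (and VERSION_ID or SYSEXT_LEVEL) match the host os-release before it
    # is merged into the /usr overlay.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = Path(d)
        if base.is_dir():
            for image in sorted(base.glob("*.raw")):
                print(image)  # e.g. /etc/extensions/kubernetes.raw
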
Sep 5 23:52:22.550437 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:52:22.550661 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 5 23:52:22.550720 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 5 23:52:22.553127 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:52:22.553140 systemd-tmpfiles[1237]: Skipping /boot Sep 5 23:52:22.560167 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:52:22.560183 systemd-tmpfiles[1237]: Skipping /boot Sep 5 23:52:22.583511 zram_generator::config[1265]: No configuration found. Sep 5 23:52:22.666354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:52:22.703958 systemd[1]: Reloading finished in 166 ms. Sep 5 23:52:22.719404 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:52:22.734046 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:52:22.741310 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:52:22.743966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:52:22.746240 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:52:22.751800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:52:22.756673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:52:22.758702 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:52:22.763322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:52:22.764637 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:52:22.768510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:52:22.773311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:52:22.774381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:52:22.777103 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:52:22.779486 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:52:22.783004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:52:22.783163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:52:22.783961 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Sep 5 23:52:22.784644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:52:22.784807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:52:22.789076 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:52:22.789231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:52:22.794291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 5 23:52:22.799937 augenrules[1330]: No rules Sep 5 23:52:22.802840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:52:22.806691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:52:22.809292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:52:22.810963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:52:22.814403 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:52:22.818102 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:52:22.819921 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:52:22.823732 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:52:22.825183 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:52:22.828484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:52:22.828636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:52:22.830707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:52:22.830843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:52:22.832213 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:52:22.832332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:52:22.836126 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:52:22.847870 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:52:22.861524 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1353) Sep 5 23:52:22.867788 systemd[1]: Finished ensure-sysext.service. Sep 5 23:52:22.870809 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:52:22.887529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:52:22.891730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:52:22.895644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:52:22.898411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:52:22.901530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:52:22.903619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:52:22.905076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:52:22.908621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 23:52:22.909441 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:52:22.909904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:52:22.910046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:52:22.911198 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:52:22.911412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 5 23:52:22.912571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:52:22.912696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:52:22.913825 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:52:22.913953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:52:22.916010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 23:52:22.921752 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:52:22.922682 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:52:22.922746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:52:22.938986 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:52:22.950720 systemd-resolved[1305]: Positive Trust Anchors: Sep 5 23:52:22.950738 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:52:22.950770 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:52:22.962347 systemd-resolved[1305]: Defaulting to hostname 'linux'. Sep 5 23:52:22.968291 systemd-networkd[1380]: lo: Link UP Sep 5 23:52:22.968298 systemd-networkd[1380]: lo: Gained carrier Sep 5 23:52:22.969722 systemd-networkd[1380]: Enumeration completed Sep 5 23:52:22.970769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:52:22.972259 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:52:22.973420 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:52:22.974446 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:52:22.974458 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:52:22.975135 systemd-networkd[1380]: eth0: Link UP Sep 5 23:52:22.975146 systemd-networkd[1380]: eth0: Gained carrier Sep 5 23:52:22.975159 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:52:22.979968 systemd[1]: Reached target network.target - Network. Sep 5 23:52:22.980729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:52:22.987638 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:52:22.988643 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 23:52:22.990018 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Sep 5 23:52:22.991784 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:52:22.993693 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 23:52:22.994545 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. Sep 5 23:52:22.994555 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:52:22.998628 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 23:52:22.998696 systemd-timesyncd[1381]: Initial clock synchronization to Fri 2025-09-05 23:52:22.840195 UTC. Sep 5 23:52:23.007318 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:52:23.016498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:52:23.055505 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:52:23.056979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:52:23.057883 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:52:23.058717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:52:23.059601 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:52:23.060633 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:52:23.061475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:52:23.062329 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:52:23.063385 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:52:23.063429 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:52:23.064137 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:52:23.065623 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:52:23.067565 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:52:23.076347 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:52:23.078300 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:52:23.079644 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:52:23.080511 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:52:23.081193 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:52:23.081936 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:52:23.081964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:52:23.082820 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:52:23.084572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:52:23.085877 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:52:23.088652 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:52:23.092700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
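
The lease logged above ("DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1") is enough to derive the on-link network; a quick check with Python's ipaddress module:

    import ipaddress

    # Derived from the DHCPv4 lease logged above for eth0.
    iface = ipaddress.ip_interface("10.0.0.43/16")
    print(iface.network)                                      # 10.0.0.0/16
    print(iface.network.netmask)                              # 255.255.0.0
    print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: gateway is on-link
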
Sep 5 23:52:23.093453 jq[1408]: false Sep 5 23:52:23.093981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:52:23.095586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:52:23.098986 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:52:23.104152 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:52:23.107179 extend-filesystems[1409]: Found loop3 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found loop4 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found loop5 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda1 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda2 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda3 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found usr Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda4 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda6 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda7 Sep 5 23:52:23.111006 extend-filesystems[1409]: Found vda9 Sep 5 23:52:23.111006 extend-filesystems[1409]: Checking size of /dev/vda9 Sep 5 23:52:23.135568 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 23:52:23.135599 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1343) Sep 5 23:52:23.110367 dbus-daemon[1407]: [system] SELinux support is enabled Sep 5 23:52:23.108050 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:52:23.141186 extend-filesystems[1409]: Resized partition /dev/vda9 Sep 5 23:52:23.111679 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:52:23.143344 extend-filesystems[1427]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:52:23.113212 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:52:23.113625 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:52:23.115392 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:52:23.144541 jq[1426]: true Sep 5 23:52:23.118602 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:52:23.119918 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:52:23.125979 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:52:23.139877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:52:23.140045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:52:23.140305 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:52:23.140433 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:52:23.142548 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:52:23.142688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
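
The kernel and resize2fs lines around here record an online growth of the root filesystem on /dev/vda9; in bytes, with the 4 KiB block size resize2fs reports:

    # Sizes implied by "EXT4-fs (vda9): resizing filesystem from 553472 to
    # 1864699 blocks" in the log above.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699
    print(old_blocks * BLOCK / 2**30)  # ~2.11 GiB before
    print(new_blocks * BLOCK / 2**30)  # ~7.11 GiB after the online resize
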
Sep 5 23:52:23.150490 update_engine[1423]: I20250905 23:52:23.150090 1423 main.cc:92] Flatcar Update Engine starting Sep 5 23:52:23.157224 update_engine[1423]: I20250905 23:52:23.156330 1423 update_check_scheduler.cc:74] Next update check in 2m55s Sep 5 23:52:23.162345 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:52:23.162658 systemd-logind[1422]: New seat seat0. Sep 5 23:52:23.163959 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:52:23.166483 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 23:52:23.166811 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:52:23.169514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:52:23.169663 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:52:23.173621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:52:23.178283 jq[1434]: true Sep 5 23:52:23.173726 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 23:52:23.194176 extend-filesystems[1427]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 23:52:23.194176 extend-filesystems[1427]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 23:52:23.194176 extend-filesystems[1427]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 23:52:23.191783 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 23:52:23.197638 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Sep 5 23:52:23.193690 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:52:23.194130 (ntainerd)[1435]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:52:23.194225 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:52:23.207542 tar[1433]: linux-arm64/helm Sep 5 23:52:23.236127 locksmithd[1445]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:52:23.252141 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:52:23.252854 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:52:23.254379 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 23:52:23.337730 containerd[1435]: time="2025-09-05T23:52:23.337543911Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:52:23.364759 containerd[1435]: time="2025-09-05T23:52:23.364532344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.365876872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.365907220Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.365922276Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366066366Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366083186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366133216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366149213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366325689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366341137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366354625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:52:23.366989 containerd[1435]: time="2025-09-05T23:52:23.366363603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366439706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366652293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366750510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366762900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366847786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:52:23.367235 containerd[1435]: time="2025-09-05T23:52:23.366887739Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:52:23.370890 containerd[1435]: time="2025-09-05T23:52:23.370858669Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:52:23.370952 containerd[1435]: time="2025-09-05T23:52:23.370906896Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 5 23:52:23.370952 containerd[1435]: time="2025-09-05T23:52:23.370923128Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:52:23.370952 containerd[1435]: time="2025-09-05T23:52:23.370937321Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:52:23.370952 containerd[1435]: time="2025-09-05T23:52:23.370951867Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:52:23.371111 containerd[1435]: time="2025-09-05T23:52:23.371092194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 23:52:23.371327 containerd[1435]: time="2025-09-05T23:52:23.371310623Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:52:23.371425 containerd[1435]: time="2025-09-05T23:52:23.371408290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:52:23.371449 containerd[1435]: time="2025-09-05T23:52:23.371428443Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:52:23.371487 containerd[1435]: time="2025-09-05T23:52:23.371441304Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:52:23.371487 containerd[1435]: time="2025-09-05T23:52:23.371464593Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371521 containerd[1435]: time="2025-09-05T23:52:23.371490667Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371521 containerd[1435]: time="2025-09-05T23:52:23.371502978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371521 containerd[1435]: time="2025-09-05T23:52:23.371515995Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371577 containerd[1435]: time="2025-09-05T23:52:23.371529248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371577 containerd[1435]: time="2025-09-05T23:52:23.371540932Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371577 containerd[1435]: time="2025-09-05T23:52:23.371552067Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371577 containerd[1435]: time="2025-09-05T23:52:23.371569593Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:52:23.371642 containerd[1435]: time="2025-09-05T23:52:23.371590961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371642 containerd[1435]: time="2025-09-05T23:52:23.371604371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371642 containerd[1435]: time="2025-09-05T23:52:23.371617035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 5 23:52:23.371642 containerd[1435]: time="2025-09-05T23:52:23.371628327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371642 containerd[1435]: time="2025-09-05T23:52:23.371639462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371652244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371664242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371676279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371688119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371701764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371712664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371731 containerd[1435]: time="2025-09-05T23:52:23.371723564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371842 containerd[1435]: time="2025-09-05T23:52:23.371737679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371842 containerd[1435]: time="2025-09-05T23:52:23.371753205Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:52:23.371842 containerd[1435]: time="2025-09-05T23:52:23.371771868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371842 containerd[1435]: time="2025-09-05T23:52:23.371787983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.371842 containerd[1435]: time="2025-09-05T23:52:23.371798451Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:52:23.372501 containerd[1435]: time="2025-09-05T23:52:23.372410413Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:52:23.372501 containerd[1435]: time="2025-09-05T23:52:23.372442525Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:52:23.372501 containerd[1435]: time="2025-09-05T23:52:23.372463776Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:52:23.372501 containerd[1435]: time="2025-09-05T23:52:23.372487693Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:52:23.372501 containerd[1435]: time="2025-09-05T23:52:23.372497887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 5 23:52:23.372604 containerd[1435]: time="2025-09-05T23:52:23.372510590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:52:23.372604 containerd[1435]: time="2025-09-05T23:52:23.372527960Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:52:23.372604 containerd[1435]: time="2025-09-05T23:52:23.372540898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 23:52:23.372914 containerd[1435]: time="2025-09-05T23:52:23.372858171Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:52:23.372914 containerd[1435]: time="2025-09-05T23:52:23.372918826Z" level=info msg="Connect containerd service" Sep 5 23:52:23.373039 containerd[1435]: time="2025-09-05T23:52:23.372946507Z" level=info msg="using legacy CRI server" Sep 5 23:52:23.373039 containerd[1435]: time="2025-09-05T23:52:23.372953643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:52:23.373073 containerd[1435]: 
time="2025-09-05T23:52:23.373057388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:52:23.373708 containerd[1435]: time="2025-09-05T23:52:23.373680799Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:52:23.374083 containerd[1435]: time="2025-09-05T23:52:23.373951571Z" level=info msg="Start subscribing containerd event" Sep 5 23:52:23.374083 containerd[1435]: time="2025-09-05T23:52:23.374009952Z" level=info msg="Start recovering state" Sep 5 23:52:23.374172 containerd[1435]: time="2025-09-05T23:52:23.374153572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:52:23.374349 containerd[1435]: time="2025-09-05T23:52:23.374333459Z" level=info msg="Start event monitor" Sep 5 23:52:23.374408 containerd[1435]: time="2025-09-05T23:52:23.374397094Z" level=info msg="Start snapshots syncer" Sep 5 23:52:23.374477 containerd[1435]: time="2025-09-05T23:52:23.374442576Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:52:23.374588 containerd[1435]: time="2025-09-05T23:52:23.374455240Z" level=info msg="Start streaming server" Sep 5 23:52:23.374652 containerd[1435]: time="2025-09-05T23:52:23.374371883Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:52:23.375349 containerd[1435]: time="2025-09-05T23:52:23.374751419Z" level=info msg="containerd successfully booted in 0.038028s" Sep 5 23:52:23.374838 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:52:23.527678 tar[1433]: linux-arm64/LICENSE Sep 5 23:52:23.527678 tar[1433]: linux-arm64/README.md Sep 5 23:52:23.550116 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:52:23.671078 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:52:23.695535 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:52:23.715301 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:52:23.720254 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:52:23.720409 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:52:23.723278 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:52:23.733918 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 23:52:23.736281 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:52:23.739725 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 23:52:23.740688 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:52:24.951772 systemd-networkd[1380]: eth0: Gained IPv6LL Sep 5 23:52:24.956509 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:52:24.957855 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:52:24.977061 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 23:52:24.979111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:24.981810 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:52:25.003128 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:52:25.004591 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Sep 5 23:52:25.004748 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 23:52:25.009176 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:52:25.505498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:25.506911 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:52:25.509490 systemd[1]: Startup finished in 493ms (kernel) + 4.903s (initrd) + 3.959s (userspace) = 9.357s. Sep 5 23:52:25.510048 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:25.855963 kubelet[1520]: E0905 23:52:25.855872 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:25.858747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:25.858890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:29.363128 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:52:29.364223 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:41578.service - OpenSSH per-connection server daemon (10.0.0.1:41578). Sep 5 23:52:29.411338 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 41578 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:29.412842 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:29.419704 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:52:29.429810 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:52:29.431236 systemd-logind[1422]: New session 1 of user core. Sep 5 23:52:29.438078 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:52:29.441457 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:52:29.447065 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:52:29.523016 systemd[1538]: Queued start job for default target default.target. Sep 5 23:52:29.535263 systemd[1538]: Created slice app.slice - User Application Slice. Sep 5 23:52:29.535290 systemd[1538]: Reached target paths.target - Paths. Sep 5 23:52:29.535302 systemd[1538]: Reached target timers.target - Timers. Sep 5 23:52:29.536357 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:52:29.544725 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:52:29.544780 systemd[1538]: Reached target sockets.target - Sockets. Sep 5 23:52:29.544791 systemd[1538]: Reached target basic.target - Basic System. Sep 5 23:52:29.544824 systemd[1538]: Reached target default.target - Main User Target. Sep 5 23:52:29.544846 systemd[1538]: Startup finished in 93ms. Sep 5 23:52:29.545038 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:52:29.546147 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:52:29.611457 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:41582.service - OpenSSH per-connection server daemon (10.0.0.1:41582). 
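The kubelet exit above is the first of several identical failures in this log: the unit starts before /var/lib/kubelet/config.yaml exists. On a kubeadm-managed node that file is generated by `kubeadm init` or `kubeadm join`, so the crash loop resolves itself once the node is bootstrapped. A minimal sketch of such a KubeletConfiguration written to the expected path; cgroupDriver and staticPodPath match values visible later in this log, the remaining fields are illustrative assumptions:

// kubelet_config.go - write a minimal KubeletConfiguration to the path the
// failing unit expects. kubeadm normally generates this file; cgroupDriver
// and staticPodPath match this log, the other fields are assumptions only.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}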
Sep 5 23:52:29.655431 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 41582 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:29.656665 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:29.662794 systemd-logind[1422]: New session 2 of user core. Sep 5 23:52:29.673631 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:52:29.727788 sshd[1549]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:29.740774 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:41582.service: Deactivated successfully. Sep 5 23:52:29.742619 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:52:29.744617 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:52:29.745084 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:41598.service - OpenSSH per-connection server daemon (10.0.0.1:41598). Sep 5 23:52:29.746172 systemd-logind[1422]: Removed session 2. Sep 5 23:52:29.781155 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:29.782297 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:29.785753 systemd-logind[1422]: New session 3 of user core. Sep 5 23:52:29.795589 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:52:29.842637 sshd[1556]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:29.851822 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:41598.service: Deactivated successfully. Sep 5 23:52:29.853215 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:52:29.854429 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:52:29.855515 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). Sep 5 23:52:29.856199 systemd-logind[1422]: Removed session 3. Sep 5 23:52:29.890796 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:29.891949 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:29.895044 systemd-logind[1422]: New session 4 of user core. Sep 5 23:52:29.916605 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:52:29.967492 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:29.979628 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:41608.service: Deactivated successfully. Sep 5 23:52:29.981346 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 23:52:29.982533 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:52:29.984156 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778). Sep 5 23:52:29.985190 systemd-logind[1422]: Removed session 4. Sep 5 23:52:30.018842 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:30.019987 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:30.023682 systemd-logind[1422]: New session 5 of user core. Sep 5 23:52:30.033626 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 23:52:30.087652 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:52:30.087915 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:30.100150 sudo[1573]: pam_unix(sudo:session): session closed for user root Sep 5 23:52:30.101711 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:30.115703 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:33778.service: Deactivated successfully. Sep 5 23:52:30.117815 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:52:30.118987 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:52:30.120157 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). Sep 5 23:52:30.120907 systemd-logind[1422]: Removed session 5. Sep 5 23:52:30.155328 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:30.156573 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:30.160235 systemd-logind[1422]: New session 6 of user core. Sep 5 23:52:30.168689 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:52:30.220099 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:52:30.220398 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:30.223846 sudo[1582]: pam_unix(sudo:session): session closed for user root Sep 5 23:52:30.228298 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:52:30.228588 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:30.248739 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:52:30.249928 auditctl[1585]: No rules Sep 5 23:52:30.250809 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:52:30.252511 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:52:30.254139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:52:30.276425 augenrules[1603]: No rules Sep 5 23:52:30.277675 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:52:30.278665 sudo[1581]: pam_unix(sudo:session): session closed for user root Sep 5 23:52:30.280379 sshd[1578]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:30.292791 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:33780.service: Deactivated successfully. Sep 5 23:52:30.294451 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:52:30.296566 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:52:30.296948 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:33796.service - OpenSSH per-connection server daemon (10.0.0.1:33796). Sep 5 23:52:30.298068 systemd-logind[1422]: Removed session 6. Sep 5 23:52:30.332687 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 33796 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:52:30.333871 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:30.337287 systemd-logind[1422]: New session 7 of user core. Sep 5 23:52:30.353631 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 5 23:52:30.404346 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:52:30.404661 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:30.689741 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:52:30.689936 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:52:30.898816 dockerd[1632]: time="2025-09-05T23:52:30.898704787Z" level=info msg="Starting up" Sep 5 23:52:31.041887 dockerd[1632]: time="2025-09-05T23:52:31.041329540Z" level=info msg="Loading containers: start." Sep 5 23:52:31.132495 kernel: Initializing XFRM netlink socket Sep 5 23:52:31.191487 systemd-networkd[1380]: docker0: Link UP Sep 5 23:52:31.205751 dockerd[1632]: time="2025-09-05T23:52:31.205698763Z" level=info msg="Loading containers: done." Sep 5 23:52:31.218087 dockerd[1632]: time="2025-09-05T23:52:31.218034169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:52:31.218229 dockerd[1632]: time="2025-09-05T23:52:31.218132774Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:52:31.218257 dockerd[1632]: time="2025-09-05T23:52:31.218228757Z" level=info msg="Daemon has completed initialization" Sep 5 23:52:31.242643 dockerd[1632]: time="2025-09-05T23:52:31.242524815Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:52:31.242890 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:52:31.886187 containerd[1435]: time="2025-09-05T23:52:31.885942863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 23:52:32.501563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195543073.mount: Deactivated successfully. 
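The PullImage record above kicks off the image-fetch sequence that fills the next several blocks: for each image the CRI plugin emits ImageCreate events for the tag, the config blob, and the digest, then a final "Pulled image ... in Ns" summary. The same pull can be reproduced against the socket logged earlier (/run/containerd/containerd.sock) with containerd's Go client; a minimal sketch, assuming the "k8s.io" namespace the CRI plugin stores its images under:

// pull_image.go - pull one of the images from this log through containerd's
// Go client. A sketch only: no retry or progress reporting.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.12", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}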
Sep 5 23:52:33.547035 containerd[1435]: time="2025-09-05T23:52:33.546974054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.547426 containerd[1435]: time="2025-09-05T23:52:33.547374757Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 5 23:52:33.548457 containerd[1435]: time="2025-09-05T23:52:33.548427287Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.552197 containerd[1435]: time="2025-09-05T23:52:33.551537486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:33.553712 containerd[1435]: time="2025-09-05T23:52:33.553670878Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.667685529s" Sep 5 23:52:33.553817 containerd[1435]: time="2025-09-05T23:52:33.553801117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 23:52:33.555379 containerd[1435]: time="2025-09-05T23:52:33.555349173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 23:52:34.966738 containerd[1435]: time="2025-09-05T23:52:34.966681174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:34.967496 containerd[1435]: time="2025-09-05T23:52:34.967438309Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 5 23:52:34.968043 containerd[1435]: time="2025-09-05T23:52:34.968016144Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:34.971140 containerd[1435]: time="2025-09-05T23:52:34.971107434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:34.972208 containerd[1435]: time="2025-09-05T23:52:34.972177775Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.416794693s" Sep 5 23:52:34.972271 containerd[1435]: time="2025-09-05T23:52:34.972211978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 23:52:34.973184 
containerd[1435]: time="2025-09-05T23:52:34.972661560Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 23:52:36.109200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:52:36.122661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:36.129182 containerd[1435]: time="2025-09-05T23:52:36.129141736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:36.129653 containerd[1435]: time="2025-09-05T23:52:36.129543170Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 5 23:52:36.130535 containerd[1435]: time="2025-09-05T23:52:36.130457529Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:36.134707 containerd[1435]: time="2025-09-05T23:52:36.134662728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:36.136365 containerd[1435]: time="2025-09-05T23:52:36.135997057Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.163302276s" Sep 5 23:52:36.136365 containerd[1435]: time="2025-09-05T23:52:36.136034366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 23:52:36.138034 containerd[1435]: time="2025-09-05T23:52:36.137555880Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 23:52:36.231123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:36.235408 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:36.303994 kubelet[1851]: E0905 23:52:36.303951 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:36.307684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:36.307812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:37.165072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202824200.mount: Deactivated successfully. 
Sep 5 23:52:37.523499 containerd[1435]: time="2025-09-05T23:52:37.522681963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:37.523805 containerd[1435]: time="2025-09-05T23:52:37.523616661Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 5 23:52:37.524282 containerd[1435]: time="2025-09-05T23:52:37.524253511Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:37.526368 containerd[1435]: time="2025-09-05T23:52:37.526325367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:37.526968 containerd[1435]: time="2025-09-05T23:52:37.526940205Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.389344539s" Sep 5 23:52:37.527015 containerd[1435]: time="2025-09-05T23:52:37.526975018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 23:52:37.527545 containerd[1435]: time="2025-09-05T23:52:37.527522462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 23:52:38.086263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064854991.mount: Deactivated successfully. 
Sep 5 23:52:38.730375 containerd[1435]: time="2025-09-05T23:52:38.730332010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:38.731286 containerd[1435]: time="2025-09-05T23:52:38.730935553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 5 23:52:38.732112 containerd[1435]: time="2025-09-05T23:52:38.732065328Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:38.735575 containerd[1435]: time="2025-09-05T23:52:38.735538746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:38.736770 containerd[1435]: time="2025-09-05T23:52:38.736733147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.209177463s" Sep 5 23:52:38.736826 containerd[1435]: time="2025-09-05T23:52:38.736770646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 23:52:38.737165 containerd[1435]: time="2025-09-05T23:52:38.737141693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:52:39.167307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110834446.mount: Deactivated successfully. 
Sep 5 23:52:39.172520 containerd[1435]: time="2025-09-05T23:52:39.172474576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:39.173000 containerd[1435]: time="2025-09-05T23:52:39.172965107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 5 23:52:39.173792 containerd[1435]: time="2025-09-05T23:52:39.173767547Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:39.175997 containerd[1435]: time="2025-09-05T23:52:39.175945884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:39.177039 containerd[1435]: time="2025-09-05T23:52:39.176922796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 439.75118ms" Sep 5 23:52:39.177039 containerd[1435]: time="2025-09-05T23:52:39.176956716Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:52:39.177818 containerd[1435]: time="2025-09-05T23:52:39.177800300Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 23:52:39.708124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995351338.mount: Deactivated successfully. Sep 5 23:52:41.457400 containerd[1435]: time="2025-09-05T23:52:41.457343725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:41.457873 containerd[1435]: time="2025-09-05T23:52:41.457833208Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 5 23:52:41.458883 containerd[1435]: time="2025-09-05T23:52:41.458849187Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:41.463495 containerd[1435]: time="2025-09-05T23:52:41.463181301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:41.465066 containerd[1435]: time="2025-09-05T23:52:41.465027592Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.287201267s" Sep 5 23:52:41.465120 containerd[1435]: time="2025-09-05T23:52:41.465071633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 23:52:46.559092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
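As a rough cross-check of the pull records above: the etcd image, 66,535,646 bytes, arrived in 2.287s, about 29 MB/s, while the tiny pause:3.10 image (267,933 bytes in ~440ms) works out to well under 1 MB/s. Small pulls are dominated by registry round-trips rather than bandwidth, which is why their per-byte rate looks so much worse.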
Sep 5 23:52:46.569709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:46.690332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:46.694560 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:46.728456 kubelet[2008]: E0905 23:52:46.728391 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:46.732223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:46.732357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:47.662604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:47.671680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:47.692293 systemd[1]: Reloading requested from client PID 2025 ('systemctl') (unit session-7.scope)... Sep 5 23:52:47.692310 systemd[1]: Reloading... Sep 5 23:52:47.764570 zram_generator::config[2067]: No configuration found. Sep 5 23:52:47.930559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:52:47.983015 systemd[1]: Reloading finished in 290 ms. Sep 5 23:52:48.023552 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:48.025754 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:52:48.027509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:48.029066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:48.134957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:48.138403 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:52:48.173817 kubelet[2111]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:52:48.173817 kubelet[2111]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 23:52:48.173817 kubelet[2111]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 23:52:48.174177 kubelet[2111]: I0905 23:52:48.173941 2111 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:52:49.167707 kubelet[2111]: I0905 23:52:49.167666 2111 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 23:52:49.167707 kubelet[2111]: I0905 23:52:49.167700 2111 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:52:49.168094 kubelet[2111]: I0905 23:52:49.168078 2111 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 23:52:49.198416 kubelet[2111]: I0905 23:52:49.198224 2111 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:52:49.200104 kubelet[2111]: E0905 23:52:49.199629 2111 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:52:49.204827 kubelet[2111]: E0905 23:52:49.204792 2111 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:52:49.204827 kubelet[2111]: I0905 23:52:49.204819 2111 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:52:49.208751 kubelet[2111]: I0905 23:52:49.208336 2111 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:52:49.209628 kubelet[2111]: I0905 23:52:49.209195 2111 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 23:52:49.209628 kubelet[2111]: I0905 23:52:49.209581 2111 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:52:49.209956 kubelet[2111]: I0905 23:52:49.209789 2111 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:52:49.210242 kubelet[2111]: I0905 23:52:49.210224 2111 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:52:49.210366 kubelet[2111]: I0905 23:52:49.210352 2111 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 23:52:49.210900 kubelet[2111]: I0905 23:52:49.210698 2111 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:49.213226 kubelet[2111]: I0905 23:52:49.212958 2111 kubelet.go:408] "Attempting to sync node with API server" Sep 5 23:52:49.213226 kubelet[2111]: I0905 23:52:49.213210 2111 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:52:49.213628 kubelet[2111]: I0905 23:52:49.213376 2111 kubelet.go:314] "Adding apiserver pod source" Sep 5 23:52:49.213628 kubelet[2111]: I0905 23:52:49.213455 2111 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:52:49.217316 kubelet[2111]: W0905 23:52:49.216828 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 5 23:52:49.217316 kubelet[2111]: E0905 23:52:49.216884 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:52:49.217316 kubelet[2111]: W0905 23:52:49.217254 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 5 23:52:49.217316 kubelet[2111]: E0905 23:52:49.217287 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:52:49.222423 kubelet[2111]: I0905 23:52:49.222390 2111 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:52:49.223613 kubelet[2111]: I0905 23:52:49.223597 2111 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 23:52:49.223715 kubelet[2111]: W0905 23:52:49.223703 2111 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:52:49.224806 kubelet[2111]: I0905 23:52:49.224667 2111 server.go:1274] "Started kubelet" Sep 5 23:52:49.225551 kubelet[2111]: I0905 23:52:49.225501 2111 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:52:49.225599 kubelet[2111]: I0905 23:52:49.225567 2111 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:52:49.230953 kubelet[2111]: I0905 23:52:49.230924 2111 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:52:49.235485 kubelet[2111]: I0905 23:52:49.234215 2111 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:52:49.235485 kubelet[2111]: I0905 23:52:49.235274 2111 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:52:49.235767 kubelet[2111]: E0905 23:52:49.234185 2111 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862880738a8a2f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 23:52:49.224639217 +0000 UTC m=+1.083239468,LastTimestamp:2025-09-05 23:52:49.224639217 +0000 UTC m=+1.083239468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 23:52:49.236105 kubelet[2111]: I0905 23:52:49.236074 2111 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 23:52:49.236229 kubelet[2111]: I0905 23:52:49.236211 2111 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 23:52:49.236281 kubelet[2111]: I0905 23:52:49.236268 2111 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:52:49.236322 kubelet[2111]: E0905 
23:52:49.236299 2111 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:52:49.236616 kubelet[2111]: E0905 23:52:49.236587 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Sep 5 23:52:49.237125 kubelet[2111]: W0905 23:52:49.236683 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 5 23:52:49.237125 kubelet[2111]: E0905 23:52:49.236729 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:52:49.237785 kubelet[2111]: I0905 23:52:49.237761 2111 server.go:449] "Adding debug handlers to kubelet server" Sep 5 23:52:49.238647 kubelet[2111]: I0905 23:52:49.238624 2111 factory.go:221] Registration of the systemd container factory successfully Sep 5 23:52:49.239708 kubelet[2111]: I0905 23:52:49.239674 2111 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:52:49.240308 kubelet[2111]: E0905 23:52:49.240287 2111 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:52:49.244178 kubelet[2111]: I0905 23:52:49.244154 2111 factory.go:221] Registration of the containerd container factory successfully Sep 5 23:52:49.254904 kubelet[2111]: I0905 23:52:49.254885 2111 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 23:52:49.254904 kubelet[2111]: I0905 23:52:49.254900 2111 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 23:52:49.255003 kubelet[2111]: I0905 23:52:49.254916 2111 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:49.261600 kubelet[2111]: I0905 23:52:49.261561 2111 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 23:52:49.262808 kubelet[2111]: I0905 23:52:49.262781 2111 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 23:52:49.262891 kubelet[2111]: I0905 23:52:49.262815 2111 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 23:52:49.262891 kubelet[2111]: I0905 23:52:49.262837 2111 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 23:52:49.262965 kubelet[2111]: E0905 23:52:49.262882 2111 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:52:49.330085 kubelet[2111]: I0905 23:52:49.330025 2111 policy_none.go:49] "None policy: Start" Sep 5 23:52:49.330751 kubelet[2111]: W0905 23:52:49.330691 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Sep 5 23:52:49.330826 kubelet[2111]: E0905 23:52:49.330752 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" Sep 5 23:52:49.331240 kubelet[2111]: I0905 23:52:49.330948 2111 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 23:52:49.331240 kubelet[2111]: I0905 23:52:49.330972 2111 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:52:49.336402 kubelet[2111]: E0905 23:52:49.336380 2111 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 23:52:49.339702 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:52:49.355175 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:52:49.358326 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 23:52:49.363342 kubelet[2111]: E0905 23:52:49.363303 2111 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 23:52:49.369850 kubelet[2111]: I0905 23:52:49.369817 2111 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 23:52:49.370436 kubelet[2111]: I0905 23:52:49.370399 2111 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:52:49.370528 kubelet[2111]: I0905 23:52:49.370465 2111 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:52:49.370993 kubelet[2111]: I0905 23:52:49.370677 2111 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:52:49.372816 kubelet[2111]: E0905 23:52:49.372793 2111 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 23:52:49.440807 kubelet[2111]: E0905 23:52:49.438079 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Sep 5 23:52:49.473283 kubelet[2111]: I0905 23:52:49.473218 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 23:52:49.473749 kubelet[2111]: E0905 23:52:49.473714 2111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Sep 5 23:52:49.583237 systemd[1]: Created slice kubepods-burstable-pod45a24210c8e3cd8249236bee459ee37f.slice - libcontainer container kubepods-burstable-pod45a24210c8e3cd8249236bee459ee37f.slice. Sep 5 23:52:49.611995 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 5 23:52:49.626296 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
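The lease-controller records trace a textbook capped exponential backoff while the API server at 10.0.0.43:6443 refuses connections: the retry interval starts at 200ms, doubles to the 400ms seen just above, and keeps doubling through 800ms, 1.6s, and 3.2s further down the log. A minimal sketch of that pattern; the cap value is an illustrative assumption, not taken from kubelet:

// backoff.go - capped exponential backoff matching the doubling retry
// intervals in the lease controller's records (200ms, 400ms, 800ms, ...).
// maxInterval is an illustrative assumption.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		time.Sleep(interval)
		interval *= 2 // double on every failure...
		if interval > maxInterval {
			interval = maxInterval // ...but never beyond the cap
		}
	}
}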
Sep 5 23:52:49.678945 kubelet[2111]: I0905 23:52:49.675439 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:49.678945 kubelet[2111]: E0905 23:52:49.676387 2111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Sep 5 23:52:49.738361 kubelet[2111]: I0905 23:52:49.738247 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:49.738361 kubelet[2111]: I0905 23:52:49.738293 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:49.738361 kubelet[2111]: I0905 23:52:49.738313 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:49.738361 kubelet[2111]: I0905 23:52:49.738330 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:49.738361 kubelet[2111]: I0905 23:52:49.738350 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:49.739042 kubelet[2111]: I0905 23:52:49.738364 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:49.739042 kubelet[2111]: I0905 23:52:49.738378 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:49.739042 kubelet[2111]: I0905 23:52:49.738393 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 23:52:49.739042 kubelet[2111]: I0905 23:52:49.738410 2111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:49.839432 kubelet[2111]: E0905 23:52:49.839377 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms"
Sep 5 23:52:49.905235 kubelet[2111]: E0905 23:52:49.904873 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:49.906133 containerd[1435]: time="2025-09-05T23:52:49.905777739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45a24210c8e3cd8249236bee459ee37f,Namespace:kube-system,Attempt:0,}"
Sep 5 23:52:49.917484 kubelet[2111]: E0905 23:52:49.917427 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:49.918041 containerd[1435]: time="2025-09-05T23:52:49.917853789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 5 23:52:49.938446 kubelet[2111]: E0905 23:52:49.938159 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:49.939034 containerd[1435]: time="2025-09-05T23:52:49.938836920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 5 23:52:50.078362 kubelet[2111]: I0905 23:52:50.077977 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:50.078362 kubelet[2111]: E0905 23:52:50.078283 2111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Sep 5 23:52:50.216214 kubelet[2111]: W0905 23:52:50.216120 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:50.216214 kubelet[2111]: E0905 23:52:50.216188 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:50.640301 kubelet[2111]: E0905 23:52:50.640208 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s"
Sep 5 23:52:50.681617 kubelet[2111]: W0905 23:52:50.681571 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:50.681726 kubelet[2111]: E0905 23:52:50.681650 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:50.684537 kubelet[2111]: W0905 23:52:50.684478 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:50.684537 kubelet[2111]: E0905 23:52:50.684536 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:50.781974 kubelet[2111]: W0905 23:52:50.778494 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:50.781974 kubelet[2111]: E0905 23:52:50.778537 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:50.880006 kubelet[2111]: I0905 23:52:50.879970 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:50.880389 kubelet[2111]: E0905 23:52:50.880353 2111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Sep 5 23:52:51.357207 kubelet[2111]: E0905 23:52:51.357164 2111 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:51.884552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747100723.mount: Deactivated successfully.
Sep 5 23:52:52.053157 containerd[1435]: time="2025-09-05T23:52:52.052586488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:52:52.067799 containerd[1435]: time="2025-09-05T23:52:52.067746082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:52:52.082405 containerd[1435]: time="2025-09-05T23:52:52.082360739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 5 23:52:52.102993 containerd[1435]: time="2025-09-05T23:52:52.102763492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 23:52:52.131424 containerd[1435]: time="2025-09-05T23:52:52.130774910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:52:52.165490 containerd[1435]: time="2025-09-05T23:52:52.165359180Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:52:52.180411 containerd[1435]: time="2025-09-05T23:52:52.180340646Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 5 23:52:52.212611 containerd[1435]: time="2025-09-05T23:52:52.212552413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 23:52:52.213428 containerd[1435]: time="2025-09-05T23:52:52.213384751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.27447927s"
Sep 5 23:52:52.214268 containerd[1435]: time="2025-09-05T23:52:52.214226884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.308363516s"
Sep 5 23:52:52.215071 containerd[1435]: time="2025-09-05T23:52:52.215037270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.297102129s"
Sep 5 23:52:52.240859 kubelet[2111]: E0905 23:52:52.240807 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="3.2s"
Sep 5 23:52:52.398190 containerd[1435]: time="2025-09-05T23:52:52.398008995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:52:52.398499 containerd[1435]: time="2025-09-05T23:52:52.398160853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:52:52.398499 containerd[1435]: time="2025-09-05T23:52:52.398189641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.398499 containerd[1435]: time="2025-09-05T23:52:52.398291879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.398757 containerd[1435]: time="2025-09-05T23:52:52.398615545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:52:52.398757 containerd[1435]: time="2025-09-05T23:52:52.398674201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:52:52.399285 containerd[1435]: time="2025-09-05T23:52:52.398769042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.399412 containerd[1435]: time="2025-09-05T23:52:52.399362398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.404652 containerd[1435]: time="2025-09-05T23:52:52.404311718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:52:52.404652 containerd[1435]: time="2025-09-05T23:52:52.404385768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:52:52.404652 containerd[1435]: time="2025-09-05T23:52:52.404397363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.407071 containerd[1435]: time="2025-09-05T23:52:52.404532507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:52:52.428772 systemd[1]: Started cri-containerd-06886adc9361272a1614ef168dbd62bd72c12681768e4c22ee95fa2383fa3a12.scope - libcontainer container 06886adc9361272a1614ef168dbd62bd72c12681768e4c22ee95fa2383fa3a12.
Sep 5 23:52:52.430549 systemd[1]: Started cri-containerd-a13ed6593d237511866153df668d4f84659bcd9974aa2e57c1c0bece259e7c87.scope - libcontainer container a13ed6593d237511866153df668d4f84659bcd9974aa2e57c1c0bece259e7c87.
Sep 5 23:52:52.434339 systemd[1]: Started cri-containerd-0cec1556e5e345643b2ec1e76b3da6f458f9966be093f98a0938d6f50bd04807.scope - libcontainer container 0cec1556e5e345643b2ec1e76b3da6f458f9966be093f98a0938d6f50bd04807.
Sep 5 23:52:52.478762 containerd[1435]: time="2025-09-05T23:52:52.478715420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"06886adc9361272a1614ef168dbd62bd72c12681768e4c22ee95fa2383fa3a12\""
Sep 5 23:52:52.481786 kubelet[2111]: I0905 23:52:52.481749 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:52.482902 kubelet[2111]: E0905 23:52:52.482837 2111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost"
Sep 5 23:52:52.483651 kubelet[2111]: E0905 23:52:52.483624 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:52.488485 containerd[1435]: time="2025-09-05T23:52:52.486328603Z" level=info msg="CreateContainer within sandbox \"06886adc9361272a1614ef168dbd62bd72c12681768e4c22ee95fa2383fa3a12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 23:52:52.491651 containerd[1435]: time="2025-09-05T23:52:52.491511427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a13ed6593d237511866153df668d4f84659bcd9974aa2e57c1c0bece259e7c87\""
Sep 5 23:52:52.492088 containerd[1435]: time="2025-09-05T23:52:52.492007343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:45a24210c8e3cd8249236bee459ee37f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cec1556e5e345643b2ec1e76b3da6f458f9966be093f98a0938d6f50bd04807\""
Sep 5 23:52:52.492840 kubelet[2111]: E0905 23:52:52.492268 2111 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1862880738a8a2f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 23:52:49.224639217 +0000 UTC m=+1.083239468,LastTimestamp:2025-09-05 23:52:49.224639217 +0000 UTC m=+1.083239468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 5 23:52:52.492840 kubelet[2111]: E0905 23:52:52.492627 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:52.492840 kubelet[2111]: E0905 23:52:52.492696 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:52.495778 containerd[1435]: time="2025-09-05T23:52:52.495729289Z" level=info msg="CreateContainer within sandbox \"0cec1556e5e345643b2ec1e76b3da6f458f9966be093f98a0938d6f50bd04807\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 23:52:52.499106 containerd[1435]: time="2025-09-05T23:52:52.498686711Z" level=info msg="CreateContainer within sandbox \"a13ed6593d237511866153df668d4f84659bcd9974aa2e57c1c0bece259e7c87\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 23:52:52.617919 containerd[1435]: time="2025-09-05T23:52:52.617863204Z" level=info msg="CreateContainer within sandbox \"06886adc9361272a1614ef168dbd62bd72c12681768e4c22ee95fa2383fa3a12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e93bc42a124368fb693207bd018a7cfadad17143f5c32100c6e7c205f112fd65\""
Sep 5 23:52:52.619104 containerd[1435]: time="2025-09-05T23:52:52.619063549Z" level=info msg="StartContainer for \"e93bc42a124368fb693207bd018a7cfadad17143f5c32100c6e7c205f112fd65\""
Sep 5 23:52:52.642292 containerd[1435]: time="2025-09-05T23:52:52.641625852Z" level=info msg="CreateContainer within sandbox \"0cec1556e5e345643b2ec1e76b3da6f458f9966be093f98a0938d6f50bd04807\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"019b03bf51fdc38ecf7676910549bf1b80a7ef6cc0ebd1932a6e26466036e0b1\""
Sep 5 23:52:52.642292 containerd[1435]: time="2025-09-05T23:52:52.642252474Z" level=info msg="StartContainer for \"019b03bf51fdc38ecf7676910549bf1b80a7ef6cc0ebd1932a6e26466036e0b1\""
Sep 5 23:52:52.653843 containerd[1435]: time="2025-09-05T23:52:52.653765010Z" level=info msg="CreateContainer within sandbox \"a13ed6593d237511866153df668d4f84659bcd9974aa2e57c1c0bece259e7c87\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe7283567b69f63ab09731e1b4af20d5617f7ff381f8c86a84ecdd202aac058e\""
Sep 5 23:52:52.654366 containerd[1435]: time="2025-09-05T23:52:52.654318062Z" level=info msg="StartContainer for \"fe7283567b69f63ab09731e1b4af20d5617f7ff381f8c86a84ecdd202aac058e\""
Sep 5 23:52:52.656016 systemd[1]: Started cri-containerd-e93bc42a124368fb693207bd018a7cfadad17143f5c32100c6e7c205f112fd65.scope - libcontainer container e93bc42a124368fb693207bd018a7cfadad17143f5c32100c6e7c205f112fd65.
Sep 5 23:52:52.677179 systemd[1]: Started cri-containerd-019b03bf51fdc38ecf7676910549bf1b80a7ef6cc0ebd1932a6e26466036e0b1.scope - libcontainer container 019b03bf51fdc38ecf7676910549bf1b80a7ef6cc0ebd1932a6e26466036e0b1.
Sep 5 23:52:52.697778 systemd[1]: Started cri-containerd-fe7283567b69f63ab09731e1b4af20d5617f7ff381f8c86a84ecdd202aac058e.scope - libcontainer container fe7283567b69f63ab09731e1b4af20d5617f7ff381f8c86a84ecdd202aac058e.
Sep 5 23:52:52.747495 kubelet[2111]: W0905 23:52:52.747210 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:52.747495 kubelet[2111]: E0905 23:52:52.747309 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:52.770158 containerd[1435]: time="2025-09-05T23:52:52.769970167Z" level=info msg="StartContainer for \"e93bc42a124368fb693207bd018a7cfadad17143f5c32100c6e7c205f112fd65\" returns successfully"
Sep 5 23:52:52.800071 kubelet[2111]: W0905 23:52:52.799989 2111 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
Sep 5 23:52:52.800071 kubelet[2111]: E0905 23:52:52.800071 2111 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
Sep 5 23:52:52.808522 containerd[1435]: time="2025-09-05T23:52:52.808455629Z" level=info msg="StartContainer for \"fe7283567b69f63ab09731e1b4af20d5617f7ff381f8c86a84ecdd202aac058e\" returns successfully"
Sep 5 23:52:52.808654 containerd[1435]: time="2025-09-05T23:52:52.808482738Z" level=info msg="StartContainer for \"019b03bf51fdc38ecf7676910549bf1b80a7ef6cc0ebd1932a6e26466036e0b1\" returns successfully"
Sep 5 23:52:53.274979 kubelet[2111]: E0905 23:52:53.274937 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:53.277334 kubelet[2111]: E0905 23:52:53.277303 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:53.279516 kubelet[2111]: E0905 23:52:53.279493 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:54.282014 kubelet[2111]: E0905 23:52:54.281973 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:54.282345 kubelet[2111]: E0905 23:52:54.282283 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:55.022950 kubelet[2111]: E0905 23:52:55.022886 2111 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 5 23:52:55.220205 kubelet[2111]: I0905 23:52:55.219922 2111 apiserver.go:52] "Watching apiserver"
Sep 5 23:52:55.237365 kubelet[2111]: I0905 23:52:55.237280 2111 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 23:52:55.283976 kubelet[2111]: E0905 23:52:55.283801 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:55.376892 kubelet[2111]: E0905 23:52:55.376837 2111 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Sep 5 23:52:55.444837 kubelet[2111]: E0905 23:52:55.444792 2111 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 5 23:52:55.685281 kubelet[2111]: I0905 23:52:55.685182 2111 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:55.696792 kubelet[2111]: I0905 23:52:55.696110 2111 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 23:52:56.577287 kubelet[2111]: E0905 23:52:56.577248 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:57.076981 systemd[1]: Reloading requested from client PID 2393 ('systemctl') (unit session-7.scope)...
Sep 5 23:52:57.076996 systemd[1]: Reloading...
Sep 5 23:52:57.135511 zram_generator::config[2438]: No configuration found.
Sep 5 23:52:57.209931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:52:57.274789 systemd[1]: Reloading finished in 197 ms.
Sep 5 23:52:57.286529 kubelet[2111]: E0905 23:52:57.286501 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:57.310524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:52:57.328619 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 23:52:57.328819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:52:57.328871 systemd[1]: kubelet.service: Consumed 1.460s CPU time, 131.2M memory peak, 0B memory swap peak.
Sep 5 23:52:57.333786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:52:57.433274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:52:57.436622 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 23:52:57.478384 kubelet[2474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:52:57.478384 kubelet[2474]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 23:52:57.478384 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 23:52:57.478731 kubelet[2474]: I0905 23:52:57.478429 2474 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 23:52:57.486504 kubelet[2474]: I0905 23:52:57.485341 2474 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 23:52:57.486504 kubelet[2474]: I0905 23:52:57.485370 2474 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 23:52:57.486504 kubelet[2474]: I0905 23:52:57.485578 2474 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 23:52:57.487353 kubelet[2474]: I0905 23:52:57.487319 2474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 5 23:52:57.490316 kubelet[2474]: I0905 23:52:57.490290 2474 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 23:52:57.492945 kubelet[2474]: E0905 23:52:57.492908 2474 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 23:52:57.492945 kubelet[2474]: I0905 23:52:57.492944 2474 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 23:52:57.495207 kubelet[2474]: I0905 23:52:57.495177 2474 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 23:52:57.495297 kubelet[2474]: I0905 23:52:57.495285 2474 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 23:52:57.495404 kubelet[2474]: I0905 23:52:57.495382 2474 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 23:52:57.495642 kubelet[2474]: I0905 23:52:57.495406 2474 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 23:52:57.495722 kubelet[2474]: I0905 23:52:57.495643 2474 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 23:52:57.495722 kubelet[2474]: I0905 23:52:57.495654 2474 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 23:52:57.495722 kubelet[2474]: I0905 23:52:57.495687 2474 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:52:57.495801 kubelet[2474]: I0905 23:52:57.495788 2474 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 23:52:57.495828 kubelet[2474]: I0905 23:52:57.495807 2474 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 23:52:57.495828 kubelet[2474]: I0905 23:52:57.495825 2474 kubelet.go:314] "Adding apiserver pod source"
Sep 5 23:52:57.498510 kubelet[2474]: I0905 23:52:57.496191 2474 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 23:52:57.498510 kubelet[2474]: I0905 23:52:57.497034 2474 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 23:52:57.498510 kubelet[2474]: I0905 23:52:57.497443 2474 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 23:52:57.498510 kubelet[2474]: I0905 23:52:57.497802 2474 server.go:1274] "Started kubelet"
Sep 5 23:52:57.499261 kubelet[2474]: I0905 23:52:57.499237 2474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 23:52:57.501607 kubelet[2474]: I0905 23:52:57.501566 2474 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 23:52:57.503517 kubelet[2474]: I0905 23:52:57.503497 2474 server.go:449] "Adding debug handlers to kubelet server"
Sep 5 23:52:57.505360 kubelet[2474]: I0905 23:52:57.505319 2474 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 23:52:57.505823 kubelet[2474]: I0905 23:52:57.505806 2474 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 23:52:57.506285 kubelet[2474]: I0905 23:52:57.506260 2474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 23:52:57.507789 kubelet[2474]: I0905 23:52:57.507772 2474 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 23:52:57.508015 kubelet[2474]: E0905 23:52:57.507986 2474 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 23:52:57.508350 kubelet[2474]: I0905 23:52:57.508335 2474 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 23:52:57.509180 kubelet[2474]: I0905 23:52:57.509157 2474 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 23:52:57.509396 kubelet[2474]: I0905 23:52:57.509364 2474 factory.go:221] Registration of the systemd container factory successfully
Sep 5 23:52:57.509600 kubelet[2474]: I0905 23:52:57.509581 2474 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 23:52:57.511753 kubelet[2474]: I0905 23:52:57.511722 2474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 23:52:57.515403 kubelet[2474]: I0905 23:52:57.515376 2474 factory.go:221] Registration of the containerd container factory successfully
Sep 5 23:52:57.515839 kubelet[2474]: I0905 23:52:57.515802 2474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 23:52:57.515839 kubelet[2474]: I0905 23:52:57.515827 2474 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 23:52:57.515839 kubelet[2474]: I0905 23:52:57.515842 2474 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 23:52:57.515923 kubelet[2474]: E0905 23:52:57.515879 2474 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 23:52:57.551395 kubelet[2474]: I0905 23:52:57.551367 2474 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 23:52:57.551584 kubelet[2474]: I0905 23:52:57.551570 2474 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 23:52:57.551667 kubelet[2474]: I0905 23:52:57.551658 2474 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 23:52:57.551850 kubelet[2474]: I0905 23:52:57.551834 2474 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 23:52:57.551919 kubelet[2474]: I0905 23:52:57.551896 2474 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 23:52:57.551963 kubelet[2474]: I0905 23:52:57.551956 2474 policy_none.go:49] "None policy: Start"
Sep 5 23:52:57.552733 kubelet[2474]: I0905 23:52:57.552714 2474 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 23:52:57.552733 kubelet[2474]: I0905 23:52:57.552738 2474 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 23:52:57.552910 kubelet[2474]: I0905 23:52:57.552895 2474 state_mem.go:75] "Updated machine memory state"
Sep 5 23:52:57.556737 kubelet[2474]: I0905 23:52:57.556701 2474 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 23:52:57.557194 kubelet[2474]: I0905 23:52:57.557091 2474 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 23:52:57.557194 kubelet[2474]: I0905 23:52:57.557108 2474 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 23:52:57.557537 kubelet[2474]: I0905 23:52:57.557302 2474 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 23:52:57.623001 kubelet[2474]: E0905 23:52:57.622890 2474 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.661394 kubelet[2474]: I0905 23:52:57.661368 2474 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 23:52:57.667718 kubelet[2474]: I0905 23:52:57.667679 2474 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 5 23:52:57.667806 kubelet[2474]: I0905 23:52:57.667753 2474 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 23:52:57.709742 kubelet[2474]: I0905 23:52:57.709686 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:57.709742 kubelet[2474]: I0905 23:52:57.709727 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.709907 kubelet[2474]: I0905 23:52:57.709766 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.709907 kubelet[2474]: I0905 23:52:57.709786 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 23:52:57.709907 kubelet[2474]: I0905 23:52:57.709803 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:57.709907 kubelet[2474]: I0905 23:52:57.709817 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.709907 kubelet[2474]: I0905 23:52:57.709833 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.710016 kubelet[2474]: I0905 23:52:57.709850 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 23:52:57.710016 kubelet[2474]: I0905 23:52:57.709868 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a24210c8e3cd8249236bee459ee37f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"45a24210c8e3cd8249236bee459ee37f\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:57.922651 kubelet[2474]: E0905 23:52:57.922528 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:57.922760 kubelet[2474]: E0905 23:52:57.922649 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:57.923634 kubelet[2474]: E0905 23:52:57.923591 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:58.496867 kubelet[2474]: I0905 23:52:58.496629 2474 apiserver.go:52] "Watching apiserver"
Sep 5 23:52:58.508874 kubelet[2474]: I0905 23:52:58.508832 2474 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 23:52:58.539689 kubelet[2474]: E0905 23:52:58.539253 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:58.539689 kubelet[2474]: E0905 23:52:58.539615 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:58.545969 kubelet[2474]: E0905 23:52:58.545918 2474 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 23:52:58.546109 kubelet[2474]: E0905 23:52:58.546095 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:58.558780 kubelet[2474]: I0905 23:52:58.558200 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.558176022 podStartE2EDuration="1.558176022s" podCreationTimestamp="2025-09-05 23:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:58.558116862 +0000 UTC m=+1.118455099" watchObservedRunningTime="2025-09-05 23:52:58.558176022 +0000 UTC m=+1.118514219"
Sep 5 23:52:58.576049 kubelet[2474]: I0905 23:52:58.575977 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.575958694 podStartE2EDuration="2.575958694s" podCreationTimestamp="2025-09-05 23:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:58.566241422 +0000 UTC m=+1.126579659" watchObservedRunningTime="2025-09-05 23:52:58.575958694 +0000 UTC m=+1.136296931"
Sep 5 23:52:58.585930 kubelet[2474]: I0905 23:52:58.585829 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.585811605 podStartE2EDuration="1.585811605s" podCreationTimestamp="2025-09-05 23:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:58.576180253 +0000 UTC m=+1.136518490" watchObservedRunningTime="2025-09-05 23:52:58.585811605 +0000 UTC m=+1.146149842"
Sep 5 23:52:59.540349 kubelet[2474]: E0905 23:52:59.540314 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:52:59.540349 kubelet[2474]: E0905 23:52:59.540351 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:02.902677 kubelet[2474]: I0905 23:53:02.902646 2474 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 23:53:02.904054 kubelet[2474]: I0905 23:53:02.903269 2474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 23:53:02.904100 containerd[1435]: time="2025-09-05T23:53:02.903088796Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 23:53:03.701730 systemd[1]: Created slice kubepods-besteffort-podfe27e428_e1c4_4896_865c_8614c98dbb5c.slice - libcontainer container kubepods-besteffort-podfe27e428_e1c4_4896_865c_8614c98dbb5c.slice.
Sep 5 23:53:03.752159 kubelet[2474]: I0905 23:53:03.751556 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe27e428-e1c4-4896-865c-8614c98dbb5c-lib-modules\") pod \"kube-proxy-mr6fs\" (UID: \"fe27e428-e1c4-4896-865c-8614c98dbb5c\") " pod="kube-system/kube-proxy-mr6fs"
Sep 5 23:53:03.752159 kubelet[2474]: I0905 23:53:03.751602 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2qb8\" (UniqueName: \"kubernetes.io/projected/fe27e428-e1c4-4896-865c-8614c98dbb5c-kube-api-access-q2qb8\") pod \"kube-proxy-mr6fs\" (UID: \"fe27e428-e1c4-4896-865c-8614c98dbb5c\") " pod="kube-system/kube-proxy-mr6fs"
Sep 5 23:53:03.752159 kubelet[2474]: I0905 23:53:03.751659 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe27e428-e1c4-4896-865c-8614c98dbb5c-kube-proxy\") pod \"kube-proxy-mr6fs\" (UID: \"fe27e428-e1c4-4896-865c-8614c98dbb5c\") " pod="kube-system/kube-proxy-mr6fs"
Sep 5 23:53:03.752159 kubelet[2474]: I0905 23:53:03.751678 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe27e428-e1c4-4896-865c-8614c98dbb5c-xtables-lock\") pod \"kube-proxy-mr6fs\" (UID: \"fe27e428-e1c4-4896-865c-8614c98dbb5c\") " pod="kube-system/kube-proxy-mr6fs"
Sep 5 23:53:03.963122 systemd[1]: Created slice kubepods-besteffort-podd14217c8_b80b_49c2_96ac_d7961273befb.slice - libcontainer container kubepods-besteffort-podd14217c8_b80b_49c2_96ac_d7961273befb.slice.
Sep 5 23:53:04.011046 kubelet[2474]: E0905 23:53:04.011011 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:04.011998 containerd[1435]: time="2025-09-05T23:53:04.011928093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr6fs,Uid:fe27e428-e1c4-4896-865c-8614c98dbb5c,Namespace:kube-system,Attempt:0,}"
Sep 5 23:53:04.054177 kubelet[2474]: I0905 23:53:04.054096 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d14217c8-b80b-49c2-96ac-d7961273befb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-pxwxz\" (UID: \"d14217c8-b80b-49c2-96ac-d7961273befb\") " pod="tigera-operator/tigera-operator-58fc44c59b-pxwxz"
Sep 5 23:53:04.054177 kubelet[2474]: I0905 23:53:04.054152 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r475\" (UniqueName: \"kubernetes.io/projected/d14217c8-b80b-49c2-96ac-d7961273befb-kube-api-access-8r475\") pod \"tigera-operator-58fc44c59b-pxwxz\" (UID: \"d14217c8-b80b-49c2-96ac-d7961273befb\") " pod="tigera-operator/tigera-operator-58fc44c59b-pxwxz"
Sep 5 23:53:04.080025 containerd[1435]: time="2025-09-05T23:53:04.079426373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:04.080025 containerd[1435]: time="2025-09-05T23:53:04.079803411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:04.080025 containerd[1435]: time="2025-09-05T23:53:04.079839651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:04.080025 containerd[1435]: time="2025-09-05T23:53:04.079947851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:04.101729 systemd[1]: Started cri-containerd-2f7288e136db27970fd1368ebf89fae3728bafecc02a0261f0291161d2715449.scope - libcontainer container 2f7288e136db27970fd1368ebf89fae3728bafecc02a0261f0291161d2715449.
Sep 5 23:53:04.123525 containerd[1435]: time="2025-09-05T23:53:04.123465056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr6fs,Uid:fe27e428-e1c4-4896-865c-8614c98dbb5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7288e136db27970fd1368ebf89fae3728bafecc02a0261f0291161d2715449\""
Sep 5 23:53:04.124421 kubelet[2474]: E0905 23:53:04.124399 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:04.127710 containerd[1435]: time="2025-09-05T23:53:04.127589601Z" level=info msg="CreateContainer within sandbox \"2f7288e136db27970fd1368ebf89fae3728bafecc02a0261f0291161d2715449\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 23:53:04.254825 containerd[1435]: time="2025-09-05T23:53:04.254160871Z" level=info msg="CreateContainer within sandbox \"2f7288e136db27970fd1368ebf89fae3728bafecc02a0261f0291161d2715449\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"579e363b3d05f874c2600722f51f7acc58c6dadf98d0faa75b2f53762c97e0b1\""
Sep 5 23:53:04.254969 containerd[1435]: time="2025-09-05T23:53:04.254936748Z" level=info msg="StartContainer for \"579e363b3d05f874c2600722f51f7acc58c6dadf98d0faa75b2f53762c97e0b1\""
Sep 5 23:53:04.268013 containerd[1435]: time="2025-09-05T23:53:04.267652183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pxwxz,Uid:d14217c8-b80b-49c2-96ac-d7961273befb,Namespace:tigera-operator,Attempt:0,}"
Sep 5 23:53:04.287647 systemd[1]: Started cri-containerd-579e363b3d05f874c2600722f51f7acc58c6dadf98d0faa75b2f53762c97e0b1.scope - libcontainer container 579e363b3d05f874c2600722f51f7acc58c6dadf98d0faa75b2f53762c97e0b1.
Sep 5 23:53:04.337796 containerd[1435]: time="2025-09-05T23:53:04.337742933Z" level=info msg="StartContainer for \"579e363b3d05f874c2600722f51f7acc58c6dadf98d0faa75b2f53762c97e0b1\" returns successfully"
Sep 5 23:53:04.459887 containerd[1435]: time="2025-09-05T23:53:04.459620979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:04.459887 containerd[1435]: time="2025-09-05T23:53:04.459693499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:04.459887 containerd[1435]: time="2025-09-05T23:53:04.459716099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:04.459887 containerd[1435]: time="2025-09-05T23:53:04.459805419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:04.481649 systemd[1]: Started cri-containerd-64f8a99dfb69e093c95efd56dbcab115f2bdd5811a0d333499446acedb6363a2.scope - libcontainer container 64f8a99dfb69e093c95efd56dbcab115f2bdd5811a0d333499446acedb6363a2.
Sep 5 23:53:04.506267 containerd[1435]: time="2025-09-05T23:53:04.506095934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pxwxz,Uid:d14217c8-b80b-49c2-96ac-d7961273befb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64f8a99dfb69e093c95efd56dbcab115f2bdd5811a0d333499446acedb6363a2\""
Sep 5 23:53:04.507682 containerd[1435]: time="2025-09-05T23:53:04.507655008Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 23:53:04.549619 kubelet[2474]: E0905 23:53:04.549587 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:04.628583 kubelet[2474]: I0905 23:53:04.628520 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mr6fs" podStartSLOduration=1.6285040579999999 podStartE2EDuration="1.628504058s" podCreationTimestamp="2025-09-05 23:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:04.62800878 +0000 UTC m=+7.188347097" watchObservedRunningTime="2025-09-05 23:53:04.628504058 +0000 UTC m=+7.188842295"
Sep 5 23:53:05.921107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555688.mount: Deactivated successfully.
Sep 5 23:53:06.210774 containerd[1435]: time="2025-09-05T23:53:06.210319366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:06.210774 containerd[1435]: time="2025-09-05T23:53:06.210699885Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 5 23:53:06.212397 containerd[1435]: time="2025-09-05T23:53:06.211964721Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:06.214487 containerd[1435]: time="2025-09-05T23:53:06.214438633Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:06.215581 containerd[1435]: time="2025-09-05T23:53:06.215540189Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.707847461s"
Sep 5 23:53:06.215653 containerd[1435]: time="2025-09-05T23:53:06.215583389Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 5 23:53:06.217808 containerd[1435]: time="2025-09-05T23:53:06.217768742Z" level=info msg="CreateContainer within sandbox \"64f8a99dfb69e093c95efd56dbcab115f2bdd5811a0d333499446acedb6363a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 23:53:06.228773 containerd[1435]: time="2025-09-05T23:53:06.228728307Z" level=info msg="CreateContainer within sandbox \"64f8a99dfb69e093c95efd56dbcab115f2bdd5811a0d333499446acedb6363a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01fb94ede9c37ca39f608a8c542f996bcffb00ac9fd302805273056998d4dc5c\""
Sep 5 23:53:06.229278 containerd[1435]: time="2025-09-05T23:53:06.229250105Z" level=info msg="StartContainer for \"01fb94ede9c37ca39f608a8c542f996bcffb00ac9fd302805273056998d4dc5c\""
Sep 5 23:53:06.256689 systemd[1]: Started cri-containerd-01fb94ede9c37ca39f608a8c542f996bcffb00ac9fd302805273056998d4dc5c.scope - libcontainer container 01fb94ede9c37ca39f608a8c542f996bcffb00ac9fd302805273056998d4dc5c.
Sep 5 23:53:06.281934 containerd[1435]: time="2025-09-05T23:53:06.281852777Z" level=info msg="StartContainer for \"01fb94ede9c37ca39f608a8c542f996bcffb00ac9fd302805273056998d4dc5c\" returns successfully"
Sep 5 23:53:07.063515 kubelet[2474]: E0905 23:53:07.063114 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:07.081380 kubelet[2474]: I0905 23:53:07.081291 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-pxwxz" podStartSLOduration=2.372130413 podStartE2EDuration="4.081272069s" podCreationTimestamp="2025-09-05 23:53:03 +0000 UTC" firstStartedPulling="2025-09-05 23:53:04.50724501 +0000 UTC m=+7.067583247" lastFinishedPulling="2025-09-05 23:53:06.216386666 +0000 UTC m=+8.776724903" observedRunningTime="2025-09-05 23:53:06.566594905 +0000 UTC m=+9.126933142" watchObservedRunningTime="2025-09-05 23:53:07.081272069 +0000 UTC m=+9.641610266"
Sep 5 23:53:07.557300 kubelet[2474]: E0905 23:53:07.557171 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:07.577449 kubelet[2474]: E0905 23:53:07.577410 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:08.155499 update_engine[1423]: I20250905 23:53:08.153517 1423 update_attempter.cc:509] Updating boot flags...
Sep 5 23:53:08.210525 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2826)
Sep 5 23:53:08.244622 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2827)
Sep 5 23:53:08.584847 kubelet[2474]: E0905 23:53:08.584809 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:09.524860 kubelet[2474]: E0905 23:53:09.524829 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:11.714307 sudo[1614]: pam_unix(sudo:session): session closed for user root
Sep 5 23:53:11.719063 sshd[1611]: pam_unix(sshd:session): session closed for user core
Sep 5 23:53:11.723151 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:33796.service: Deactivated successfully.
Sep 5 23:53:11.725828 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 23:53:11.727573 systemd[1]: session-7.scope: Consumed 7.773s CPU time, 147.2M memory peak, 0B memory swap peak.
Sep 5 23:53:11.729164 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Sep 5 23:53:11.730155 systemd-logind[1422]: Removed session 7.
Sep 5 23:53:17.171081 systemd[1]: Created slice kubepods-besteffort-pod529751e0_7caa_44a5_8ccf_da335e0850ff.slice - libcontainer container kubepods-besteffort-pod529751e0_7caa_44a5_8ccf_da335e0850ff.slice.
Sep 5 23:53:17.345960 kubelet[2474]: I0905 23:53:17.345911 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/529751e0-7caa-44a5-8ccf-da335e0850ff-tigera-ca-bundle\") pod \"calico-typha-66c4586678-t4ng4\" (UID: \"529751e0-7caa-44a5-8ccf-da335e0850ff\") " pod="calico-system/calico-typha-66c4586678-t4ng4"
Sep 5 23:53:17.345960 kubelet[2474]: I0905 23:53:17.345954 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrww6\" (UniqueName: \"kubernetes.io/projected/529751e0-7caa-44a5-8ccf-da335e0850ff-kube-api-access-wrww6\") pod \"calico-typha-66c4586678-t4ng4\" (UID: \"529751e0-7caa-44a5-8ccf-da335e0850ff\") " pod="calico-system/calico-typha-66c4586678-t4ng4"
Sep 5 23:53:17.346292 kubelet[2474]: I0905 23:53:17.345973 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/529751e0-7caa-44a5-8ccf-da335e0850ff-typha-certs\") pod \"calico-typha-66c4586678-t4ng4\" (UID: \"529751e0-7caa-44a5-8ccf-da335e0850ff\") " pod="calico-system/calico-typha-66c4586678-t4ng4"
Sep 5 23:53:17.349307 systemd[1]: Created slice kubepods-besteffort-pod6673a26c_7307_4365_b171_a4bc75e4b3b6.slice - libcontainer container kubepods-besteffort-pod6673a26c_7307_4365_b171_a4bc75e4b3b6.slice.
Sep 5 23:53:17.447265 kubelet[2474]: I0905 23:53:17.447230 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-cni-log-dir\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447400 kubelet[2474]: I0905 23:53:17.447271 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-var-run-calico\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447400 kubelet[2474]: I0905 23:53:17.447317 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-cni-net-dir\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447400 kubelet[2474]: I0905 23:53:17.447335 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-policysync\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447400 kubelet[2474]: I0905 23:53:17.447350 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6673a26c-7307-4365-b171-a4bc75e4b3b6-tigera-ca-bundle\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447400 kubelet[2474]: I0905 23:53:17.447366 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-lib-modules\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447538 kubelet[2474]: I0905 23:53:17.447393 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-flexvol-driver-host\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447538 kubelet[2474]: I0905 23:53:17.447409 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6673a26c-7307-4365-b171-a4bc75e4b3b6-node-certs\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447538 kubelet[2474]: I0905 23:53:17.447426 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-cni-bin-dir\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447538 kubelet[2474]: I0905 23:53:17.447441 2474 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-var-lib-calico\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447538 kubelet[2474]: I0905 23:53:17.447479 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6673a26c-7307-4365-b171-a4bc75e4b3b6-xtables-lock\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.447642 kubelet[2474]: I0905 23:53:17.447500 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ntd\" (UniqueName: \"kubernetes.io/projected/6673a26c-7307-4365-b171-a4bc75e4b3b6-kube-api-access-w8ntd\") pod \"calico-node-sfgd9\" (UID: \"6673a26c-7307-4365-b171-a4bc75e4b3b6\") " pod="calico-system/calico-node-sfgd9" Sep 5 23:53:17.479032 kubelet[2474]: E0905 23:53:17.478988 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:17.479682 containerd[1435]: time="2025-09-05T23:53:17.479644271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c4586678-t4ng4,Uid:529751e0-7caa-44a5-8ccf-da335e0850ff,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:17.507078 containerd[1435]: time="2025-09-05T23:53:17.506631940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:17.507078 containerd[1435]: time="2025-09-05T23:53:17.506726540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:17.507078 containerd[1435]: time="2025-09-05T23:53:17.506833140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:17.507078 containerd[1435]: time="2025-09-05T23:53:17.506938420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:17.540696 systemd[1]: Started cri-containerd-792895fac18d800ecbf9425092482dbea113033b9e6c49bafb018b413de919c2.scope - libcontainer container 792895fac18d800ecbf9425092482dbea113033b9e6c49bafb018b413de919c2. Sep 5 23:53:17.556630 kubelet[2474]: E0905 23:53:17.556539 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.556630 kubelet[2474]: W0905 23:53:17.556562 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.556630 kubelet[2474]: E0905 23:53:17.556586 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.584353 containerd[1435]: time="2025-09-05T23:53:17.584307914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c4586678-t4ng4,Uid:529751e0-7caa-44a5-8ccf-da335e0850ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"792895fac18d800ecbf9425092482dbea113033b9e6c49bafb018b413de919c2\"" Sep 5 23:53:17.585658 kubelet[2474]: E0905 23:53:17.585131 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:17.586331 containerd[1435]: time="2025-09-05T23:53:17.586293070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:53:17.591250 kubelet[2474]: E0905 23:53:17.591068 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxc2c" podUID="d08ad477-f492-4278-9581-c6fba1569e81" Sep 5 23:53:17.631234 kubelet[2474]: E0905 23:53:17.631200 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.631561 kubelet[2474]: W0905 23:53:17.631429 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.631561 kubelet[2474]: E0905 23:53:17.631482 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.651118 kubelet[2474]: E0905 23:53:17.651062 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.651118 kubelet[2474]: W0905 23:53:17.651088 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.651118 kubelet[2474]: E0905 23:53:17.651109 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.651716 kubelet[2474]: E0905 23:53:17.651683 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.652250 kubelet[2474]: W0905 23:53:17.652115 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.652250 kubelet[2474]: E0905 23:53:17.652146 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.653106 kubelet[2474]: E0905 23:53:17.652906 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.653408 kubelet[2474]: W0905 23:53:17.652925 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.653408 kubelet[2474]: E0905 23:53:17.653367 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.654754 kubelet[2474]: E0905 23:53:17.654674 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.654754 kubelet[2474]: W0905 23:53:17.654698 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.654754 kubelet[2474]: E0905 23:53:17.654714 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.655594 kubelet[2474]: E0905 23:53:17.655095 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.655594 kubelet[2474]: W0905 23:53:17.655123 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.655594 kubelet[2474]: E0905 23:53:17.655139 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.655594 kubelet[2474]: I0905 23:53:17.655160 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d08ad477-f492-4278-9581-c6fba1569e81-kubelet-dir\") pod \"csi-node-driver-nxc2c\" (UID: \"d08ad477-f492-4278-9581-c6fba1569e81\") " pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:17.655594 kubelet[2474]: E0905 23:53:17.655426 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.655594 kubelet[2474]: W0905 23:53:17.655438 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.655594 kubelet[2474]: E0905 23:53:17.655453 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.655918 kubelet[2474]: E0905 23:53:17.655755 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.655918 kubelet[2474]: W0905 23:53:17.655767 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.656118 kubelet[2474]: E0905 23:53:17.655984 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.656266 kubelet[2474]: E0905 23:53:17.656231 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.656333 kubelet[2474]: W0905 23:53:17.656312 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.656441 kubelet[2474]: E0905 23:53:17.656384 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.656672 kubelet[2474]: E0905 23:53:17.656658 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.656824 kubelet[2474]: W0905 23:53:17.656723 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.656824 kubelet[2474]: E0905 23:53:17.656746 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.656973 kubelet[2474]: E0905 23:53:17.656960 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.657032 kubelet[2474]: W0905 23:53:17.657021 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.657141 kubelet[2474]: E0905 23:53:17.657084 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.657343 kubelet[2474]: E0905 23:53:17.657329 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.657402 kubelet[2474]: W0905 23:53:17.657390 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.657504 kubelet[2474]: E0905 23:53:17.657456 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.658459 kubelet[2474]: E0905 23:53:17.658322 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.658459 kubelet[2474]: W0905 23:53:17.658344 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.658459 kubelet[2474]: E0905 23:53:17.658356 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.660175 containerd[1435]: time="2025-09-05T23:53:17.658696293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sfgd9,Uid:6673a26c-7307-4365-b171-a4bc75e4b3b6,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:17.661018 kubelet[2474]: E0905 23:53:17.660604 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.661176 kubelet[2474]: W0905 23:53:17.661154 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.661296 kubelet[2474]: E0905 23:53:17.661278 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.661838 kubelet[2474]: E0905 23:53:17.661820 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.661929 kubelet[2474]: W0905 23:53:17.661917 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.662143 kubelet[2474]: E0905 23:53:17.662112 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.664015 kubelet[2474]: E0905 23:53:17.663859 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.664015 kubelet[2474]: W0905 23:53:17.663877 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.664015 kubelet[2474]: E0905 23:53:17.663891 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.665726 kubelet[2474]: E0905 23:53:17.665556 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.665726 kubelet[2474]: W0905 23:53:17.665580 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.665726 kubelet[2474]: E0905 23:53:17.665594 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.666103 kubelet[2474]: E0905 23:53:17.665965 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.666103 kubelet[2474]: W0905 23:53:17.665976 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.666103 kubelet[2474]: E0905 23:53:17.665987 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.666877 kubelet[2474]: E0905 23:53:17.666742 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.666877 kubelet[2474]: W0905 23:53:17.666756 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.666877 kubelet[2474]: E0905 23:53:17.666774 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.667085 kubelet[2474]: E0905 23:53:17.667066 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.667255 kubelet[2474]: W0905 23:53:17.667128 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.667255 kubelet[2474]: E0905 23:53:17.667144 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.667409 kubelet[2474]: E0905 23:53:17.667391 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.667494 kubelet[2474]: W0905 23:53:17.667461 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.667568 kubelet[2474]: E0905 23:53:17.667557 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.667811 kubelet[2474]: E0905 23:53:17.667797 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.667888 kubelet[2474]: W0905 23:53:17.667876 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.667977 kubelet[2474]: E0905 23:53:17.667957 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.668401 kubelet[2474]: E0905 23:53:17.668375 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.668848 kubelet[2474]: W0905 23:53:17.668657 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.668963 kubelet[2474]: E0905 23:53:17.668945 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.671979 kubelet[2474]: E0905 23:53:17.671223 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.672253 kubelet[2474]: W0905 23:53:17.672106 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.672253 kubelet[2474]: E0905 23:53:17.672154 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.710246 containerd[1435]: time="2025-09-05T23:53:17.709774837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:17.710246 containerd[1435]: time="2025-09-05T23:53:17.709930836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:17.710246 containerd[1435]: time="2025-09-05T23:53:17.710035276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:17.711211 containerd[1435]: time="2025-09-05T23:53:17.710855355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:17.745572 systemd[1]: Started cri-containerd-ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b.scope - libcontainer container ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b. 
Sep 5 23:53:17.758754 kubelet[2474]: E0905 23:53:17.757041 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.758754 kubelet[2474]: W0905 23:53:17.758248 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.758754 kubelet[2474]: E0905 23:53:17.758283 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.760282 kubelet[2474]: E0905 23:53:17.758837 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.760282 kubelet[2474]: W0905 23:53:17.758849 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.760282 kubelet[2474]: E0905 23:53:17.758863 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.760282 kubelet[2474]: I0905 23:53:17.758884 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d08ad477-f492-4278-9581-c6fba1569e81-registration-dir\") pod \"csi-node-driver-nxc2c\" (UID: \"d08ad477-f492-4278-9581-c6fba1569e81\") " pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:17.764501 kubelet[2474]: E0905 23:53:17.762175 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.764501 kubelet[2474]: W0905 23:53:17.762195 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.764501 kubelet[2474]: E0905 23:53:17.762220 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.764765 kubelet[2474]: I0905 23:53:17.764706 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d08ad477-f492-4278-9581-c6fba1569e81-socket-dir\") pod \"csi-node-driver-nxc2c\" (UID: \"d08ad477-f492-4278-9581-c6fba1569e81\") " pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:17.764887 kubelet[2474]: E0905 23:53:17.764861 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.764887 kubelet[2474]: W0905 23:53:17.764881 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.764949 kubelet[2474]: E0905 23:53:17.764902 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.765184 kubelet[2474]: E0905 23:53:17.765165 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.765184 kubelet[2474]: W0905 23:53:17.765178 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.765275 kubelet[2474]: E0905 23:53:17.765226 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.765449 kubelet[2474]: E0905 23:53:17.765429 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.765449 kubelet[2474]: W0905 23:53:17.765443 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.765642 kubelet[2474]: E0905 23:53:17.765541 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.765666 kubelet[2474]: E0905 23:53:17.765656 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.765690 kubelet[2474]: W0905 23:53:17.765666 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.765690 kubelet[2474]: E0905 23:53:17.765684 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.768754 kubelet[2474]: E0905 23:53:17.768713 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.768754 kubelet[2474]: W0905 23:53:17.768750 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.768942 kubelet[2474]: E0905 23:53:17.768778 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.768942 kubelet[2474]: I0905 23:53:17.768819 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpmq\" (UniqueName: \"kubernetes.io/projected/d08ad477-f492-4278-9581-c6fba1569e81-kube-api-access-5jpmq\") pod \"csi-node-driver-nxc2c\" (UID: \"d08ad477-f492-4278-9581-c6fba1569e81\") " pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:17.769094 kubelet[2474]: E0905 23:53:17.769072 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.769137 kubelet[2474]: W0905 23:53:17.769102 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.769137 kubelet[2474]: E0905 23:53:17.769121 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.772365 kubelet[2474]: E0905 23:53:17.772336 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.772365 kubelet[2474]: W0905 23:53:17.772357 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.772517 kubelet[2474]: E0905 23:53:17.772422 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.772715 kubelet[2474]: E0905 23:53:17.772577 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.772715 kubelet[2474]: W0905 23:53:17.772594 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.772715 kubelet[2474]: E0905 23:53:17.772606 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.772832 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774599 kubelet[2474]: W0905 23:53:17.772850 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.772860 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.772992 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774599 kubelet[2474]: W0905 23:53:17.772999 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.773007 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.773195 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774599 kubelet[2474]: W0905 23:53:17.773204 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.774599 kubelet[2474]: E0905 23:53:17.773213 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.774839 kubelet[2474]: I0905 23:53:17.773241 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d08ad477-f492-4278-9581-c6fba1569e81-varrun\") pod \"csi-node-driver-nxc2c\" (UID: \"d08ad477-f492-4278-9581-c6fba1569e81\") " pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:17.774839 kubelet[2474]: E0905 23:53:17.773449 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774839 kubelet[2474]: W0905 23:53:17.773460 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.774839 kubelet[2474]: E0905 23:53:17.773484 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.774839 kubelet[2474]: E0905 23:53:17.773626 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774839 kubelet[2474]: W0905 23:53:17.773637 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.774839 kubelet[2474]: E0905 23:53:17.773645 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.774839 kubelet[2474]: E0905 23:53:17.773780 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.774839 kubelet[2474]: W0905 23:53:17.773787 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.775015 kubelet[2474]: E0905 23:53:17.773795 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.830317 containerd[1435]: time="2025-09-05T23:53:17.830205049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sfgd9,Uid:6673a26c-7307-4365-b171-a4bc75e4b3b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\"" Sep 5 23:53:17.874482 kubelet[2474]: E0905 23:53:17.874442 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.874482 kubelet[2474]: W0905 23:53:17.874489 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.874645 kubelet[2474]: E0905 23:53:17.874513 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.874891 kubelet[2474]: E0905 23:53:17.874875 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.874891 kubelet[2474]: W0905 23:53:17.874890 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.874954 kubelet[2474]: E0905 23:53:17.874914 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.875108 kubelet[2474]: E0905 23:53:17.875096 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.875174 kubelet[2474]: W0905 23:53:17.875158 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.875209 kubelet[2474]: E0905 23:53:17.875183 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.875441 kubelet[2474]: E0905 23:53:17.875423 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.875496 kubelet[2474]: W0905 23:53:17.875441 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.875496 kubelet[2474]: E0905 23:53:17.875462 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.875674 kubelet[2474]: E0905 23:53:17.875664 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.875674 kubelet[2474]: W0905 23:53:17.875674 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.875740 kubelet[2474]: E0905 23:53:17.875686 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.875890 kubelet[2474]: E0905 23:53:17.875880 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.875890 kubelet[2474]: W0905 23:53:17.875890 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.875944 kubelet[2474]: E0905 23:53:17.875902 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.876505 kubelet[2474]: E0905 23:53:17.876378 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.876505 kubelet[2474]: W0905 23:53:17.876395 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.876505 kubelet[2474]: E0905 23:53:17.876416 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.876723 kubelet[2474]: E0905 23:53:17.876664 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.876723 kubelet[2474]: W0905 23:53:17.876676 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.876782 kubelet[2474]: E0905 23:53:17.876732 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.877010 kubelet[2474]: E0905 23:53:17.876957 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.877010 kubelet[2474]: W0905 23:53:17.876969 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.877010 kubelet[2474]: E0905 23:53:17.876994 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.877380 kubelet[2474]: E0905 23:53:17.877292 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.877380 kubelet[2474]: W0905 23:53:17.877305 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.877380 kubelet[2474]: E0905 23:53:17.877335 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.877809 kubelet[2474]: E0905 23:53:17.877719 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.877809 kubelet[2474]: W0905 23:53:17.877735 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.877809 kubelet[2474]: E0905 23:53:17.877753 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.878169 kubelet[2474]: E0905 23:53:17.878073 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.878169 kubelet[2474]: W0905 23:53:17.878087 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.878169 kubelet[2474]: E0905 23:53:17.878104 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.878759 kubelet[2474]: E0905 23:53:17.878646 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.878759 kubelet[2474]: W0905 23:53:17.878659 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.878759 kubelet[2474]: E0905 23:53:17.878739 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.879135 kubelet[2474]: E0905 23:53:17.879003 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.879135 kubelet[2474]: W0905 23:53:17.879014 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.879135 kubelet[2474]: E0905 23:53:17.879058 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.879402 kubelet[2474]: E0905 23:53:17.879332 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.879402 kubelet[2474]: W0905 23:53:17.879346 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.879402 kubelet[2474]: E0905 23:53:17.879361 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.879804 kubelet[2474]: E0905 23:53:17.879659 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.879804 kubelet[2474]: W0905 23:53:17.879674 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.879804 kubelet[2474]: E0905 23:53:17.879685 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.881371 kubelet[2474]: E0905 23:53:17.881215 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.881371 kubelet[2474]: W0905 23:53:17.881235 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.881371 kubelet[2474]: E0905 23:53:17.881256 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.881566 kubelet[2474]: E0905 23:53:17.881551 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.881619 kubelet[2474]: W0905 23:53:17.881608 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.881667 kubelet[2474]: E0905 23:53:17.881657 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:17.881917 kubelet[2474]: E0905 23:53:17.881902 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.881980 kubelet[2474]: W0905 23:53:17.881969 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.882033 kubelet[2474]: E0905 23:53:17.882022 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.882349 kubelet[2474]: E0905 23:53:17.882334 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.882453 kubelet[2474]: W0905 23:53:17.882411 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.882453 kubelet[2474]: E0905 23:53:17.882426 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.890657 kubelet[2474]: E0905 23:53:17.890633 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.890847 kubelet[2474]: W0905 23:53:17.890750 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.890847 kubelet[2474]: E0905 23:53:17.890771 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:18.581383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331881688.mount: Deactivated successfully. 
Sep 5 23:53:19.193103 containerd[1435]: time="2025-09-05T23:53:19.193045865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.193691 containerd[1435]: time="2025-09-05T23:53:19.193584624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 23:53:19.194574 containerd[1435]: time="2025-09-05T23:53:19.194520422Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.196646 containerd[1435]: time="2025-09-05T23:53:19.196600859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:19.197377 containerd[1435]: time="2025-09-05T23:53:19.197224417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.610684388s" Sep 5 23:53:19.197377 containerd[1435]: time="2025-09-05T23:53:19.197335017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:53:19.199623 containerd[1435]: time="2025-09-05T23:53:19.199515093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:53:19.210889 containerd[1435]: time="2025-09-05T23:53:19.210851274Z" level=info msg="CreateContainer within sandbox \"792895fac18d800ecbf9425092482dbea113033b9e6c49bafb018b413de919c2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:53:19.226584 containerd[1435]: time="2025-09-05T23:53:19.226434527Z" level=info msg="CreateContainer within sandbox \"792895fac18d800ecbf9425092482dbea113033b9e6c49bafb018b413de919c2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"06fbe8d360777661be7a564f4bd8d9adf9e98a4905098de40207861a22c01882\"" Sep 5 23:53:19.227112 containerd[1435]: time="2025-09-05T23:53:19.227077086Z" level=info msg="StartContainer for \"06fbe8d360777661be7a564f4bd8d9adf9e98a4905098de40207861a22c01882\"" Sep 5 23:53:19.262669 systemd[1]: Started cri-containerd-06fbe8d360777661be7a564f4bd8d9adf9e98a4905098de40207861a22c01882.scope - libcontainer container 06fbe8d360777661be7a564f4bd8d9adf9e98a4905098de40207861a22c01882. 
Sep 5 23:53:19.294351 containerd[1435]: time="2025-09-05T23:53:19.294198449Z" level=info msg="StartContainer for \"06fbe8d360777661be7a564f4bd8d9adf9e98a4905098de40207861a22c01882\" returns successfully" Sep 5 23:53:19.516900 kubelet[2474]: E0905 23:53:19.516428 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxc2c" podUID="d08ad477-f492-4278-9581-c6fba1569e81" Sep 5 23:53:19.615043 kubelet[2474]: E0905 23:53:19.614939 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:19.626261 kubelet[2474]: I0905 23:53:19.626198 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c4586678-t4ng4" podStartSLOduration=1.01306401 podStartE2EDuration="2.626178033s" podCreationTimestamp="2025-09-05 23:53:17 +0000 UTC" firstStartedPulling="2025-09-05 23:53:17.585928791 +0000 UTC m=+20.146266988" lastFinishedPulling="2025-09-05 23:53:19.199042774 +0000 UTC m=+21.759381011" observedRunningTime="2025-09-05 23:53:19.626057354 +0000 UTC m=+22.186395591" watchObservedRunningTime="2025-09-05 23:53:19.626178033 +0000 UTC m=+22.186516270" Sep 5 23:53:19.687903 kubelet[2474]: E0905 23:53:19.687757 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:19.687903 kubelet[2474]: W0905 23:53:19.687784 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:19.687903 kubelet[2474]: E0905 23:53:19.687806 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:19.688262 kubelet[2474]: E0905 23:53:19.687985 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:19.688262 kubelet[2474]: W0905 23:53:19.687995 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:19.688262 kubelet[2474]: E0905 23:53:19.688005 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:19.688421 kubelet[2474]: E0905 23:53:19.688383 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:19.688421 kubelet[2474]: W0905 23:53:19.688397 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:19.688645 kubelet[2474]: E0905 23:53:19.688408 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:20.213246 containerd[1435]: time="2025-09-05T23:53:20.213195550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:20.214155 containerd[1435]: time="2025-09-05T23:53:20.213698029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 23:53:20.214761 containerd[1435]: time="2025-09-05T23:53:20.214732988Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:20.216702 containerd[1435]: time="2025-09-05T23:53:20.216668824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:20.220795 containerd[1435]: time="2025-09-05T23:53:20.220744418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.021187685s" Sep 5 23:53:20.220795 containerd[1435]: time="2025-09-05T23:53:20.220793817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:53:20.228391 containerd[1435]: time="2025-09-05T23:53:20.228345165Z" level=info msg="CreateContainer within sandbox \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:53:20.243516 containerd[1435]: time="2025-09-05T23:53:20.243426820Z" level=info msg="CreateContainer within sandbox \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f\"" Sep 5 23:53:20.244242 containerd[1435]: time="2025-09-05T23:53:20.244206179Z" level=info msg="StartContainer for \"dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f\"" Sep 5 23:53:20.288708 systemd[1]: Started cri-containerd-dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f.scope - libcontainer container dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f. Sep 5 23:53:20.312250 containerd[1435]: time="2025-09-05T23:53:20.312206905Z" level=info msg="StartContainer for \"dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f\" returns successfully" Sep 5 23:53:20.328398 systemd[1]: cri-containerd-dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f.scope: Deactivated successfully. 
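The repeated driver-call.go/plugins.go messages above are the kubelet probing the FlexVolume plugin directory before the flexvol-driver container (the pod2daemon-flexvol image pulled here) has installed the nodeagent~uds/uds binary: the exec fails, the captured output is empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". The contract being probed is simple: the driver binary is exec'ed with a verb such as init and must print a JSON status object. A hedged stub of that contract follows; field names follow the FlexVolume convention, and this is illustrative, not Calico's actual uds driver.

// Sketch of the FlexVolume call convention the kubelet probe expects:
//   <driver> init  ->  {"status":"Success","capabilities":{"attach":false}}
// An empty reply is exactly what produces "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any verb this stub does not implement is reported as unsupported,
	// mirroring the convention of the reference FlexVolume drivers.
	reply(driverStatus{Status: "Not supported"})
	os.Exit(1)
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}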
Sep 5 23:53:20.432371 containerd[1435]: time="2025-09-05T23:53:20.429502150Z" level=info msg="shim disconnected" id=dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f namespace=k8s.io Sep 5 23:53:20.432371 containerd[1435]: time="2025-09-05T23:53:20.432352185Z" level=warning msg="cleaning up after shim disconnected" id=dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f namespace=k8s.io Sep 5 23:53:20.432371 containerd[1435]: time="2025-09-05T23:53:20.432377825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:53:20.452852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd1cd7487a8473d6572699b08582301323421f8db45c9416f8bcabec9082be5f-rootfs.mount: Deactivated successfully. Sep 5 23:53:20.621012 kubelet[2474]: I0905 23:53:20.620894 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:20.621408 kubelet[2474]: E0905 23:53:20.621276 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:20.622604 containerd[1435]: time="2025-09-05T23:53:20.622548549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:53:21.517525 kubelet[2474]: E0905 23:53:21.517452 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxc2c" podUID="d08ad477-f492-4278-9581-c6fba1569e81" Sep 5 23:53:22.518903 containerd[1435]: time="2025-09-05T23:53:22.518612124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.519442 containerd[1435]: time="2025-09-05T23:53:22.519338243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:53:22.520461 containerd[1435]: time="2025-09-05T23:53:22.520412361Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.522560 containerd[1435]: time="2025-09-05T23:53:22.522525078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.523512 containerd[1435]: time="2025-09-05T23:53:22.523306637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.900696888s" Sep 5 23:53:22.523512 containerd[1435]: time="2025-09-05T23:53:22.523348517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:53:22.525639 containerd[1435]: time="2025-09-05T23:53:22.525598593Z" level=info msg="CreateContainer within sandbox \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:53:22.539443 
containerd[1435]: time="2025-09-05T23:53:22.539405332Z" level=info msg="CreateContainer within sandbox \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c\"" Sep 5 23:53:22.539959 containerd[1435]: time="2025-09-05T23:53:22.539803771Z" level=info msg="StartContainer for \"182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c\"" Sep 5 23:53:22.572626 systemd[1]: Started cri-containerd-182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c.scope - libcontainer container 182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c. Sep 5 23:53:22.598513 containerd[1435]: time="2025-09-05T23:53:22.598382601Z" level=info msg="StartContainer for \"182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c\" returns successfully" Sep 5 23:53:23.110722 containerd[1435]: time="2025-09-05T23:53:23.110640260Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:53:23.115349 systemd[1]: cri-containerd-182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c.scope: Deactivated successfully. Sep 5 23:53:23.143994 kubelet[2474]: I0905 23:53:23.143966 2474 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 23:53:23.502848 systemd[1]: Created slice kubepods-besteffort-podd49fff1f_0b54_40b8_bccc_fc99d8d97bff.slice - libcontainer container kubepods-besteffort-podd49fff1f_0b54_40b8_bccc_fc99d8d97bff.slice. Sep 5 23:53:23.509840 systemd[1]: Created slice kubepods-besteffort-pod939a9432_9bb9_45cc_becf_b30fdb72d8af.slice - libcontainer container kubepods-besteffort-pod939a9432_9bb9_45cc_becf_b30fdb72d8af.slice. Sep 5 23:53:23.515047 systemd[1]: Created slice kubepods-besteffort-pod5e413f16_c817_4175_8b6a_241c2e2a0ced.slice - libcontainer container kubepods-besteffort-pod5e413f16_c817_4175_8b6a_241c2e2a0ced.slice. 
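The cni-config reload failure above fires because containerd watches /etc/cni/net.d, and the file just written, calico-kubeconfig, is a credentials file rather than a network config: discovery only accepts *.conf/*.conflist entries, and install-cni writes its network config list (typically named 10-calico.conflist, an assumption about Calico's defaults rather than something shown in this log) as a later step. A sketch of that discovery step with the reference libcni package:

// Hedged sketch of CNI config discovery over /etc/cni/net.d, mirroring why a
// write of calico-kubeconfig still leaves "no network config found": only
// *.conf/*.conflist files count, and none exist yet at this point in the log.
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist"})
	if err != nil {
		fmt.Println("reading /etc/cni/net.d:", err)
		return
	}
	if len(files) == 0 {
		// The state this log captures: install-cni has written
		// calico-kubeconfig but not yet its network config list.
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	for _, f := range files {
		confList, err := libcni.ConfListFromFile(f)
		if err != nil {
			// Plain .conf files fail conflist parsing; report and move on.
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: network %q, %d plugin(s)\n", f, confList.Name, len(confList.Plugins))
	}
}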
Sep 5 23:53:23.521826 kubelet[2474]: I0905 23:53:23.521708 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r9dr\" (UniqueName: \"kubernetes.io/projected/fcbb6ba6-1660-45d6-89cb-7cb16d9b9272-kube-api-access-8r9dr\") pod \"calico-apiserver-857bdbd65-rfjbb\" (UID: \"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272\") " pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" Sep 5 23:53:23.521826 kubelet[2474]: I0905 23:53:23.521754 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zzm8\" (UniqueName: \"kubernetes.io/projected/5e413f16-c817-4175-8b6a-241c2e2a0ced-kube-api-access-8zzm8\") pod \"calico-kube-controllers-6db9444479-4dqph\" (UID: \"5e413f16-c817-4175-8b6a-241c2e2a0ced\") " pod="calico-system/calico-kube-controllers-6db9444479-4dqph" Sep 5 23:53:23.521826 kubelet[2474]: I0905 23:53:23.521775 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcbb6ba6-1660-45d6-89cb-7cb16d9b9272-calico-apiserver-certs\") pod \"calico-apiserver-857bdbd65-rfjbb\" (UID: \"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272\") " pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" Sep 5 23:53:23.522158 systemd[1]: Created slice kubepods-besteffort-pod35cf1f2e_da60_43ff_8f52_05621e07097d.slice - libcontainer container kubepods-besteffort-pod35cf1f2e_da60_43ff_8f52_05621e07097d.slice. Sep 5 23:53:23.526640 kubelet[2474]: I0905 23:53:23.521792 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdgw\" (UniqueName: \"kubernetes.io/projected/35cf1f2e-da60-43ff-8f52-05621e07097d-kube-api-access-9kdgw\") pod \"goldmane-7988f88666-lrvz2\" (UID: \"35cf1f2e-da60-43ff-8f52-05621e07097d\") " pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.526723 kubelet[2474]: I0905 23:53:23.526662 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-ca-bundle\") pod \"whisker-556c969bf5-mkv7h\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " pod="calico-system/whisker-556c969bf5-mkv7h" Sep 5 23:53:23.526723 kubelet[2474]: I0905 23:53:23.526685 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e413f16-c817-4175-8b6a-241c2e2a0ced-tigera-ca-bundle\") pod \"calico-kube-controllers-6db9444479-4dqph\" (UID: \"5e413f16-c817-4175-8b6a-241c2e2a0ced\") " pod="calico-system/calico-kube-controllers-6db9444479-4dqph" Sep 5 23:53:23.526723 kubelet[2474]: I0905 23:53:23.526702 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35cf1f2e-da60-43ff-8f52-05621e07097d-goldmane-ca-bundle\") pod \"goldmane-7988f88666-lrvz2\" (UID: \"35cf1f2e-da60-43ff-8f52-05621e07097d\") " pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.526723 kubelet[2474]: I0905 23:53:23.526717 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24814833-7fb2-4299-991a-3537be9695bf-config-volume\") pod \"coredns-7c65d6cfc9-6k77q\" (UID: \"24814833-7fb2-4299-991a-3537be9695bf\") " 
pod="kube-system/coredns-7c65d6cfc9-6k77q" Sep 5 23:53:23.526823 kubelet[2474]: I0905 23:53:23.526738 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d49fff1f-0b54-40b8-bccc-fc99d8d97bff-calico-apiserver-certs\") pod \"calico-apiserver-857bdbd65-8z7q2\" (UID: \"d49fff1f-0b54-40b8-bccc-fc99d8d97bff\") " pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" Sep 5 23:53:23.526823 kubelet[2474]: I0905 23:53:23.526753 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35cf1f2e-da60-43ff-8f52-05621e07097d-config\") pod \"goldmane-7988f88666-lrvz2\" (UID: \"35cf1f2e-da60-43ff-8f52-05621e07097d\") " pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.526823 kubelet[2474]: I0905 23:53:23.526770 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6c4\" (UniqueName: \"kubernetes.io/projected/24814833-7fb2-4299-991a-3537be9695bf-kube-api-access-dt6c4\") pod \"coredns-7c65d6cfc9-6k77q\" (UID: \"24814833-7fb2-4299-991a-3537be9695bf\") " pod="kube-system/coredns-7c65d6cfc9-6k77q" Sep 5 23:53:23.526893 kubelet[2474]: I0905 23:53:23.526830 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzgq\" (UniqueName: \"kubernetes.io/projected/d49fff1f-0b54-40b8-bccc-fc99d8d97bff-kube-api-access-hgzgq\") pod \"calico-apiserver-857bdbd65-8z7q2\" (UID: \"d49fff1f-0b54-40b8-bccc-fc99d8d97bff\") " pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" Sep 5 23:53:23.526893 kubelet[2474]: I0905 23:53:23.526860 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6zh\" (UniqueName: \"kubernetes.io/projected/b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3-kube-api-access-pb6zh\") pod \"coredns-7c65d6cfc9-hc2b4\" (UID: \"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3\") " pod="kube-system/coredns-7c65d6cfc9-hc2b4" Sep 5 23:53:23.526945 kubelet[2474]: I0905 23:53:23.526904 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3-config-volume\") pod \"coredns-7c65d6cfc9-hc2b4\" (UID: \"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3\") " pod="kube-system/coredns-7c65d6cfc9-hc2b4" Sep 5 23:53:23.526972 kubelet[2474]: I0905 23:53:23.526944 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-backend-key-pair\") pod \"whisker-556c969bf5-mkv7h\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " pod="calico-system/whisker-556c969bf5-mkv7h" Sep 5 23:53:23.526972 kubelet[2474]: I0905 23:53:23.526963 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/35cf1f2e-da60-43ff-8f52-05621e07097d-goldmane-key-pair\") pod \"goldmane-7988f88666-lrvz2\" (UID: \"35cf1f2e-da60-43ff-8f52-05621e07097d\") " pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.527029 kubelet[2474]: I0905 23:53:23.526982 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42bl\" (UniqueName: 
\"kubernetes.io/projected/939a9432-9bb9-45cc-becf-b30fdb72d8af-kube-api-access-c42bl\") pod \"whisker-556c969bf5-mkv7h\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " pod="calico-system/whisker-556c969bf5-mkv7h" Sep 5 23:53:23.528021 containerd[1435]: time="2025-09-05T23:53:23.527946642Z" level=info msg="shim disconnected" id=182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c namespace=k8s.io Sep 5 23:53:23.528021 containerd[1435]: time="2025-09-05T23:53:23.528002602Z" level=warning msg="cleaning up after shim disconnected" id=182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c namespace=k8s.io Sep 5 23:53:23.528021 containerd[1435]: time="2025-09-05T23:53:23.528021482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:53:23.534226 systemd[1]: Created slice kubepods-burstable-pod24814833_7fb2_4299_991a_3537be9695bf.slice - libcontainer container kubepods-burstable-pod24814833_7fb2_4299_991a_3537be9695bf.slice. Sep 5 23:53:23.535936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-182be4ed258bf2555c993c1c502011ecd3837eb63c62dab676cb0bab197ab32c-rootfs.mount: Deactivated successfully. Sep 5 23:53:23.544739 systemd[1]: Created slice kubepods-burstable-podb824ed2d_3ec1_40d0_9ed1_af7f8cdb6fb3.slice - libcontainer container kubepods-burstable-podb824ed2d_3ec1_40d0_9ed1_af7f8cdb6fb3.slice. Sep 5 23:53:23.557021 systemd[1]: Created slice kubepods-besteffort-podfcbb6ba6_1660_45d6_89cb_7cb16d9b9272.slice - libcontainer container kubepods-besteffort-podfcbb6ba6_1660_45d6_89cb_7cb16d9b9272.slice. Sep 5 23:53:23.561443 systemd[1]: Created slice kubepods-besteffort-podd08ad477_f492_4278_9581_c6fba1569e81.slice - libcontainer container kubepods-besteffort-podd08ad477_f492_4278_9581_c6fba1569e81.slice. Sep 5 23:53:23.563826 containerd[1435]: time="2025-09-05T23:53:23.563786069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxc2c,Uid:d08ad477-f492-4278-9581-c6fba1569e81,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:23.659179 containerd[1435]: time="2025-09-05T23:53:23.659054688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:53:23.719090 containerd[1435]: time="2025-09-05T23:53:23.719024519Z" level=error msg="Failed to destroy network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.719400 containerd[1435]: time="2025-09-05T23:53:23.719372079Z" level=error msg="encountered an error cleaning up failed sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.719453 containerd[1435]: time="2025-09-05T23:53:23.719432239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxc2c,Uid:d08ad477-f492-4278-9581-c6fba1569e81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
23:53:23.719723 kubelet[2474]: E0905 23:53:23.719682 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.721786 kubelet[2474]: E0905 23:53:23.721587 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:23.721786 kubelet[2474]: E0905 23:53:23.721627 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxc2c" Sep 5 23:53:23.721786 kubelet[2474]: E0905 23:53:23.721686 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nxc2c_calico-system(d08ad477-f492-4278-9581-c6fba1569e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nxc2c_calico-system(d08ad477-f492-4278-9581-c6fba1569e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nxc2c" podUID="d08ad477-f492-4278-9581-c6fba1569e81" Sep 5 23:53:23.806965 containerd[1435]: time="2025-09-05T23:53:23.806829109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-8z7q2,Uid:d49fff1f-0b54-40b8-bccc-fc99d8d97bff,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:53:23.813000 containerd[1435]: time="2025-09-05T23:53:23.812915060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556c969bf5-mkv7h,Uid:939a9432-9bb9-45cc-becf-b30fdb72d8af,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:23.819901 containerd[1435]: time="2025-09-05T23:53:23.819860330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db9444479-4dqph,Uid:5e413f16-c817-4175-8b6a-241c2e2a0ced,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:23.829130 containerd[1435]: time="2025-09-05T23:53:23.829049597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lrvz2,Uid:35cf1f2e-da60-43ff-8f52-05621e07097d,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:23.841981 kubelet[2474]: E0905 23:53:23.841940 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:23.843752 containerd[1435]: time="2025-09-05T23:53:23.843680815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6k77q,Uid:24814833-7fb2-4299-991a-3537be9695bf,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:23.850920 kubelet[2474]: E0905 23:53:23.850512 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:23.853283 containerd[1435]: time="2025-09-05T23:53:23.851233604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc2b4,Uid:b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:23.861182 containerd[1435]: time="2025-09-05T23:53:23.861138589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-rfjbb,Uid:fcbb6ba6-1660-45d6-89cb-7cb16d9b9272,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:53:23.920509 containerd[1435]: time="2025-09-05T23:53:23.920422541Z" level=error msg="Failed to destroy network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.921206 containerd[1435]: time="2025-09-05T23:53:23.921174300Z" level=error msg="encountered an error cleaning up failed sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.921377 containerd[1435]: time="2025-09-05T23:53:23.921352620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-8z7q2,Uid:d49fff1f-0b54-40b8-bccc-fc99d8d97bff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.921736 kubelet[2474]: E0905 23:53:23.921693 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.921817 kubelet[2474]: E0905 23:53:23.921754 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" Sep 5 23:53:23.921817 kubelet[2474]: E0905 23:53:23.921774 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" Sep 5 23:53:23.921882 kubelet[2474]: E0905 23:53:23.921814 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857bdbd65-8z7q2_calico-apiserver(d49fff1f-0b54-40b8-bccc-fc99d8d97bff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857bdbd65-8z7q2_calico-apiserver(d49fff1f-0b54-40b8-bccc-fc99d8d97bff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" podUID="d49fff1f-0b54-40b8-bccc-fc99d8d97bff" Sep 5 23:53:23.930318 containerd[1435]: time="2025-09-05T23:53:23.930263887Z" level=error msg="Failed to destroy network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.931419 containerd[1435]: time="2025-09-05T23:53:23.931381285Z" level=error msg="encountered an error cleaning up failed sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.931917 containerd[1435]: time="2025-09-05T23:53:23.931878924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556c969bf5-mkv7h,Uid:939a9432-9bb9-45cc-becf-b30fdb72d8af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.932298 kubelet[2474]: E0905 23:53:23.932257 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.932366 kubelet[2474]: E0905 23:53:23.932314 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-556c969bf5-mkv7h" Sep 5 23:53:23.932366 kubelet[2474]: E0905 23:53:23.932334 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-556c969bf5-mkv7h" Sep 5 23:53:23.932417 kubelet[2474]: E0905 23:53:23.932375 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-556c969bf5-mkv7h_calico-system(939a9432-9bb9-45cc-becf-b30fdb72d8af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-556c969bf5-mkv7h_calico-system(939a9432-9bb9-45cc-becf-b30fdb72d8af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-556c969bf5-mkv7h" podUID="939a9432-9bb9-45cc-becf-b30fdb72d8af" Sep 5 23:53:23.942317 containerd[1435]: time="2025-09-05T23:53:23.942267789Z" level=error msg="Failed to destroy network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.942956 containerd[1435]: time="2025-09-05T23:53:23.942620548Z" level=error msg="encountered an error cleaning up failed sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.942956 containerd[1435]: time="2025-09-05T23:53:23.942677428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db9444479-4dqph,Uid:5e413f16-c817-4175-8b6a-241c2e2a0ced,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.943084 kubelet[2474]: E0905 23:53:23.942854 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.943084 kubelet[2474]: E0905 23:53:23.942906 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db9444479-4dqph" Sep 5 23:53:23.943084 kubelet[2474]: E0905 23:53:23.942927 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db9444479-4dqph" Sep 5 23:53:23.943176 kubelet[2474]: E0905 23:53:23.942960 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6db9444479-4dqph_calico-system(5e413f16-c817-4175-8b6a-241c2e2a0ced)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6db9444479-4dqph_calico-system(5e413f16-c817-4175-8b6a-241c2e2a0ced)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db9444479-4dqph" podUID="5e413f16-c817-4175-8b6a-241c2e2a0ced" Sep 5 23:53:23.970768 containerd[1435]: time="2025-09-05T23:53:23.970715267Z" level=error msg="Failed to destroy network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.971571 containerd[1435]: time="2025-09-05T23:53:23.971534466Z" level=error msg="encountered an error cleaning up failed sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.971640 containerd[1435]: time="2025-09-05T23:53:23.971601866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6k77q,Uid:24814833-7fb2-4299-991a-3537be9695bf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.971870 kubelet[2474]: E0905 23:53:23.971823 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.971920 kubelet[2474]: E0905 23:53:23.971883 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6k77q" Sep 5 23:53:23.971920 kubelet[2474]: E0905 23:53:23.971901 2474 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6k77q" Sep 5 23:53:23.971969 kubelet[2474]: E0905 23:53:23.971947 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6k77q_kube-system(24814833-7fb2-4299-991a-3537be9695bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6k77q_kube-system(24814833-7fb2-4299-991a-3537be9695bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6k77q" podUID="24814833-7fb2-4299-991a-3537be9695bf" Sep 5 23:53:23.975977 containerd[1435]: time="2025-09-05T23:53:23.975928739Z" level=error msg="Failed to destroy network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.976849 containerd[1435]: time="2025-09-05T23:53:23.976108379Z" level=error msg="Failed to destroy network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.976849 containerd[1435]: time="2025-09-05T23:53:23.976286579Z" level=error msg="encountered an error cleaning up failed sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.976849 containerd[1435]: time="2025-09-05T23:53:23.976333659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc2b4,Uid:b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.976849 containerd[1435]: time="2025-09-05T23:53:23.976407858Z" level=error msg="encountered an error cleaning up failed sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.976849 containerd[1435]: time="2025-09-05T23:53:23.976460338Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-lrvz2,Uid:35cf1f2e-da60-43ff-8f52-05621e07097d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.977039 kubelet[2474]: E0905 23:53:23.976510 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.977039 kubelet[2474]: E0905 23:53:23.976570 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hc2b4" Sep 5 23:53:23.977039 kubelet[2474]: E0905 23:53:23.976588 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hc2b4" Sep 5 23:53:23.977039 kubelet[2474]: E0905 23:53:23.976669 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.977142 kubelet[2474]: E0905 23:53:23.976707 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.977142 kubelet[2474]: E0905 23:53:23.976724 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-lrvz2" Sep 5 23:53:23.977142 kubelet[2474]: E0905 23:53:23.976758 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-lrvz2_calico-system(35cf1f2e-da60-43ff-8f52-05621e07097d)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"goldmane-7988f88666-lrvz2_calico-system(35cf1f2e-da60-43ff-8f52-05621e07097d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-lrvz2" podUID="35cf1f2e-da60-43ff-8f52-05621e07097d" Sep 5 23:53:23.977974 kubelet[2474]: E0905 23:53:23.977419 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hc2b4_kube-system(b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hc2b4_kube-system(b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hc2b4" podUID="b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3" Sep 5 23:53:23.978610 containerd[1435]: time="2025-09-05T23:53:23.978574815Z" level=error msg="Failed to destroy network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.978861 containerd[1435]: time="2025-09-05T23:53:23.978837095Z" level=error msg="encountered an error cleaning up failed sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.978909 containerd[1435]: time="2025-09-05T23:53:23.978885175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-rfjbb,Uid:fcbb6ba6-1660-45d6-89cb-7cb16d9b9272,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.979092 kubelet[2474]: E0905 23:53:23.979067 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:23.979143 kubelet[2474]: E0905 23:53:23.979105 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" Sep 5 23:53:23.979143 kubelet[2474]: E0905 23:53:23.979121 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" Sep 5 23:53:23.979190 kubelet[2474]: E0905 23:53:23.979152 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-857bdbd65-rfjbb_calico-apiserver(fcbb6ba6-1660-45d6-89cb-7cb16d9b9272)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-857bdbd65-rfjbb_calico-apiserver(fcbb6ba6-1660-45d6-89cb-7cb16d9b9272)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" podUID="fcbb6ba6-1660-45d6-89cb-7cb16d9b9272" Sep 5 23:53:24.564538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63-shm.mount: Deactivated successfully. Sep 5 23:53:24.661320 kubelet[2474]: I0905 23:53:24.661130 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:24.664818 containerd[1435]: time="2025-09-05T23:53:24.663755317Z" level=info msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" Sep 5 23:53:24.664818 containerd[1435]: time="2025-09-05T23:53:24.663925837Z" level=info msg="Ensure that sandbox f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b in task-service has been cleanup successfully" Sep 5 23:53:24.666537 kubelet[2474]: I0905 23:53:24.665957 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:24.666604 containerd[1435]: time="2025-09-05T23:53:24.666549593Z" level=info msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" Sep 5 23:53:24.667072 containerd[1435]: time="2025-09-05T23:53:24.666680473Z" level=info msg="Ensure that sandbox 099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e in task-service has been cleanup successfully" Sep 5 23:53:24.669103 kubelet[2474]: I0905 23:53:24.668258 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:24.670189 containerd[1435]: time="2025-09-05T23:53:24.669749188Z" level=info msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\"" Sep 5 23:53:24.670189 containerd[1435]: time="2025-09-05T23:53:24.669891428Z" level=info msg="Ensure that sandbox e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3 in task-service has been cleanup successfully" Sep 5 23:53:24.674912 kubelet[2474]: I0905 
23:53:24.673997 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:24.675309 containerd[1435]: time="2025-09-05T23:53:24.674522341Z" level=info msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\"" Sep 5 23:53:24.675309 containerd[1435]: time="2025-09-05T23:53:24.674664021Z" level=info msg="Ensure that sandbox 2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6 in task-service has been cleanup successfully" Sep 5 23:53:24.679661 kubelet[2474]: I0905 23:53:24.679580 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:24.680855 containerd[1435]: time="2025-09-05T23:53:24.680663293Z" level=info msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\"" Sep 5 23:53:24.680929 containerd[1435]: time="2025-09-05T23:53:24.680863172Z" level=info msg="Ensure that sandbox 09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b in task-service has been cleanup successfully" Sep 5 23:53:24.685837 kubelet[2474]: I0905 23:53:24.684066 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:24.685909 containerd[1435]: time="2025-09-05T23:53:24.684642887Z" level=info msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" Sep 5 23:53:24.685909 containerd[1435]: time="2025-09-05T23:53:24.684798647Z" level=info msg="Ensure that sandbox 99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63 in task-service has been cleanup successfully" Sep 5 23:53:24.688944 kubelet[2474]: I0905 23:53:24.688913 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:24.689695 containerd[1435]: time="2025-09-05T23:53:24.689373320Z" level=info msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\"" Sep 5 23:53:24.689695 containerd[1435]: time="2025-09-05T23:53:24.689533960Z" level=info msg="Ensure that sandbox cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c in task-service has been cleanup successfully" Sep 5 23:53:24.693532 kubelet[2474]: I0905 23:53:24.693503 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:24.694590 containerd[1435]: time="2025-09-05T23:53:24.694553313Z" level=info msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" Sep 5 23:53:24.694918 containerd[1435]: time="2025-09-05T23:53:24.694882192Z" level=info msg="Ensure that sandbox 2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692 in task-service has been cleanup successfully" Sep 5 23:53:24.739324 containerd[1435]: time="2025-09-05T23:53:24.739256169Z" level=error msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" failed" error="failed to destroy network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 23:53:24.745218 containerd[1435]: time="2025-09-05T23:53:24.745160721Z" level=error msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" failed" error="failed to destroy network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.745403 kubelet[2474]: E0905 23:53:24.739496 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:24.745454 kubelet[2474]: E0905 23:53:24.745393 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:24.745525 kubelet[2474]: E0905 23:53:24.745424 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b"} Sep 5 23:53:24.745566 kubelet[2474]: E0905 23:53:24.745522 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"939a9432-9bb9-45cc-becf-b30fdb72d8af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.745566 kubelet[2474]: E0905 23:53:24.745544 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"939a9432-9bb9-45cc-becf-b30fdb72d8af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-556c969bf5-mkv7h" podUID="939a9432-9bb9-45cc-becf-b30fdb72d8af" Sep 5 23:53:24.745566 kubelet[2474]: E0905 23:53:24.745434 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e"} Sep 5 23:53:24.745683 kubelet[2474]: E0905 23:53:24.745582 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24814833-7fb2-4299-991a-3537be9695bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.745683 kubelet[2474]: E0905 23:53:24.745600 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24814833-7fb2-4299-991a-3537be9695bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6k77q" podUID="24814833-7fb2-4299-991a-3537be9695bf" Sep 5 23:53:24.749853 containerd[1435]: time="2025-09-05T23:53:24.749806074Z" level=error msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" failed" error="failed to destroy network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.750223 kubelet[2474]: E0905 23:53:24.750174 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:24.750283 kubelet[2474]: E0905 23:53:24.750233 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63"} Sep 5 23:53:24.750283 kubelet[2474]: E0905 23:53:24.750268 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d08ad477-f492-4278-9581-c6fba1569e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.750556 kubelet[2474]: E0905 23:53:24.750292 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d08ad477-f492-4278-9581-c6fba1569e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nxc2c" podUID="d08ad477-f492-4278-9581-c6fba1569e81" Sep 5 23:53:24.760836 containerd[1435]: time="2025-09-05T23:53:24.760390699Z" level=error msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" failed" error="failed to 
destroy network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.761956 kubelet[2474]: E0905 23:53:24.761818 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:24.761956 kubelet[2474]: E0905 23:53:24.761868 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"} Sep 5 23:53:24.761956 kubelet[2474]: E0905 23:53:24.761899 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35cf1f2e-da60-43ff-8f52-05621e07097d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.761956 kubelet[2474]: E0905 23:53:24.761920 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35cf1f2e-da60-43ff-8f52-05621e07097d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-lrvz2" podUID="35cf1f2e-da60-43ff-8f52-05621e07097d" Sep 5 23:53:24.768544 containerd[1435]: time="2025-09-05T23:53:24.763619334Z" level=error msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" failed" error="failed to destroy network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.768721 containerd[1435]: time="2025-09-05T23:53:24.765196092Z" level=error msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" failed" error="failed to destroy network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.768832 containerd[1435]: time="2025-09-05T23:53:24.766081411Z" level=error msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" failed" error="failed to destroy network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.769102 kubelet[2474]: E0905 23:53:24.768959 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:24.769102 kubelet[2474]: E0905 23:53:24.768976 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:24.769102 kubelet[2474]: E0905 23:53:24.769018 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b"} Sep 5 23:53:24.769102 kubelet[2474]: E0905 23:53:24.769024 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"} Sep 5 23:53:24.769102 kubelet[2474]: E0905 23:53:24.769052 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e413f16-c817-4175-8b6a-241c2e2a0ced\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.769273 kubelet[2474]: E0905 23:53:24.769055 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d49fff1f-0b54-40b8-bccc-fc99d8d97bff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.769273 kubelet[2474]: E0905 23:53:24.768966 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:24.769273 kubelet[2474]: E0905 23:53:24.769075 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e413f16-c817-4175-8b6a-241c2e2a0ced\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db9444479-4dqph" podUID="5e413f16-c817-4175-8b6a-241c2e2a0ced" Sep 5 23:53:24.769273 kubelet[2474]: E0905 23:53:24.769093 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692"} Sep 5 23:53:24.769385 kubelet[2474]: E0905 23:53:24.769122 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.769385 kubelet[2474]: E0905 23:53:24.769149 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" podUID="fcbb6ba6-1660-45d6-89cb-7cb16d9b9272" Sep 5 23:53:24.769385 kubelet[2474]: E0905 23:53:24.769093 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d49fff1f-0b54-40b8-bccc-fc99d8d97bff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" podUID="d49fff1f-0b54-40b8-bccc-fc99d8d97bff" Sep 5 23:53:24.776237 containerd[1435]: time="2025-09-05T23:53:24.776142197Z" level=error msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" failed" error="failed to destroy network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.777272 kubelet[2474]: E0905 23:53:24.777156 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:24.777272 kubelet[2474]: E0905 
23:53:24.777197 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3"} Sep 5 23:53:24.777272 kubelet[2474]: E0905 23:53:24.777227 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.777272 kubelet[2474]: E0905 23:53:24.777245 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hc2b4" podUID="b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3" Sep 5 23:53:27.818691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949656458.mount: Deactivated successfully. Sep 5 23:53:27.859672 containerd[1435]: time="2025-09-05T23:53:27.859575470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:27.860914 containerd[1435]: time="2025-09-05T23:53:27.860871429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:53:27.864175 containerd[1435]: time="2025-09-05T23:53:27.864141264Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:27.866434 containerd[1435]: time="2025-09-05T23:53:27.866396541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:27.867342 containerd[1435]: time="2025-09-05T23:53:27.867170020Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.208073052s" Sep 5 23:53:27.867342 containerd[1435]: time="2025-09-05T23:53:27.867202660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:53:27.878409 containerd[1435]: time="2025-09-05T23:53:27.878271686Z" level=info msg="CreateContainer within sandbox \"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:53:27.902825 containerd[1435]: time="2025-09-05T23:53:27.902770495Z" level=info msg="CreateContainer within sandbox 
\"ea5c11af4accbd1e1491c2fcbdf8c8e4980ab2c5d6c3ca73f9aff95ee03fd34b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ddaf0e040e25f8cb3b39c4fa966e581b0bcdba14a934308882a1594d7dd2bbff\"" Sep 5 23:53:27.904853 containerd[1435]: time="2025-09-05T23:53:27.904764252Z" level=info msg="StartContainer for \"ddaf0e040e25f8cb3b39c4fa966e581b0bcdba14a934308882a1594d7dd2bbff\"" Sep 5 23:53:27.952731 systemd[1]: Started cri-containerd-ddaf0e040e25f8cb3b39c4fa966e581b0bcdba14a934308882a1594d7dd2bbff.scope - libcontainer container ddaf0e040e25f8cb3b39c4fa966e581b0bcdba14a934308882a1594d7dd2bbff. Sep 5 23:53:27.981627 containerd[1435]: time="2025-09-05T23:53:27.981268754Z" level=info msg="StartContainer for \"ddaf0e040e25f8cb3b39c4fa966e581b0bcdba14a934308882a1594d7dd2bbff\" returns successfully" Sep 5 23:53:28.123113 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:53:28.123232 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 23:53:28.371223 containerd[1435]: time="2025-09-05T23:53:28.370895309Z" level=info msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.469 [INFO][3770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.471 [INFO][3770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" iface="eth0" netns="/var/run/netns/cni-e5fee3d7-9c70-9061-00e1-26e5ec95280e" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.473 [INFO][3770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" iface="eth0" netns="/var/run/netns/cni-e5fee3d7-9c70-9061-00e1-26e5ec95280e" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.474 [INFO][3770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" iface="eth0" netns="/var/run/netns/cni-e5fee3d7-9c70-9061-00e1-26e5ec95280e" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.474 [INFO][3770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.474 [INFO][3770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.539 [INFO][3780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.539 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.539 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.550 [WARNING][3780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.550 [INFO][3780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.552 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:28.558498 containerd[1435]: 2025-09-05 23:53:28.555 [INFO][3770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:28.558914 containerd[1435]: time="2025-09-05T23:53:28.558633756Z" level=info msg="TearDown network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" successfully" Sep 5 23:53:28.558914 containerd[1435]: time="2025-09-05T23:53:28.558663916Z" level=info msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" returns successfully" Sep 5 23:53:28.722349 kubelet[2474]: I0905 23:53:28.722279 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sfgd9" podStartSLOduration=1.688750854 podStartE2EDuration="11.722261073s" podCreationTimestamp="2025-09-05 23:53:17 +0000 UTC" firstStartedPulling="2025-09-05 23:53:17.834376161 +0000 UTC m=+20.394714398" lastFinishedPulling="2025-09-05 23:53:27.86788642 +0000 UTC m=+30.428224617" observedRunningTime="2025-09-05 23:53:28.721015194 +0000 UTC m=+31.281353471" watchObservedRunningTime="2025-09-05 23:53:28.722261073 +0000 UTC m=+31.282599270" Sep 5 23:53:28.761864 kubelet[2474]: I0905 23:53:28.761790 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c42bl\" (UniqueName: \"kubernetes.io/projected/939a9432-9bb9-45cc-becf-b30fdb72d8af-kube-api-access-c42bl\") pod \"939a9432-9bb9-45cc-becf-b30fdb72d8af\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " Sep 5 23:53:28.761864 kubelet[2474]: I0905 23:53:28.761843 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-ca-bundle\") pod \"939a9432-9bb9-45cc-becf-b30fdb72d8af\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " Sep 5 23:53:28.762480 kubelet[2474]: I0905 23:53:28.762266 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-backend-key-pair\") pod \"939a9432-9bb9-45cc-becf-b30fdb72d8af\" (UID: \"939a9432-9bb9-45cc-becf-b30fdb72d8af\") " Sep 5 23:53:28.768822 kubelet[2474]: I0905 23:53:28.768714 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "939a9432-9bb9-45cc-becf-b30fdb72d8af" (UID: "939a9432-9bb9-45cc-becf-b30fdb72d8af"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 23:53:28.772424 kubelet[2474]: I0905 23:53:28.772364 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939a9432-9bb9-45cc-becf-b30fdb72d8af-kube-api-access-c42bl" (OuterVolumeSpecName: "kube-api-access-c42bl") pod "939a9432-9bb9-45cc-becf-b30fdb72d8af" (UID: "939a9432-9bb9-45cc-becf-b30fdb72d8af"). InnerVolumeSpecName "kube-api-access-c42bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 23:53:28.781703 kubelet[2474]: I0905 23:53:28.781634 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "939a9432-9bb9-45cc-becf-b30fdb72d8af" (UID: "939a9432-9bb9-45cc-becf-b30fdb72d8af"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 23:53:28.819885 systemd[1]: run-netns-cni\x2de5fee3d7\x2d9c70\x2d9061\x2d00e1\x2d26e5ec95280e.mount: Deactivated successfully. Sep 5 23:53:28.819993 systemd[1]: var-lib-kubelet-pods-939a9432\x2d9bb9\x2d45cc\x2dbecf\x2db30fdb72d8af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc42bl.mount: Deactivated successfully. Sep 5 23:53:28.820062 systemd[1]: var-lib-kubelet-pods-939a9432\x2d9bb9\x2d45cc\x2dbecf\x2db30fdb72d8af-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 23:53:28.864225 kubelet[2474]: I0905 23:53:28.864169 2474 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 23:53:28.864225 kubelet[2474]: I0905 23:53:28.864209 2474 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/939a9432-9bb9-45cc-becf-b30fdb72d8af-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 23:53:28.864225 kubelet[2474]: I0905 23:53:28.864220 2474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c42bl\" (UniqueName: \"kubernetes.io/projected/939a9432-9bb9-45cc-becf-b30fdb72d8af-kube-api-access-c42bl\") on node \"localhost\" DevicePath \"\"" Sep 5 23:53:29.007742 systemd[1]: Removed slice kubepods-besteffort-pod939a9432_9bb9_45cc_becf_b30fdb72d8af.slice - libcontainer container kubepods-besteffort-pod939a9432_9bb9_45cc_becf_b30fdb72d8af.slice. Sep 5 23:53:29.098446 systemd[1]: Created slice kubepods-besteffort-poddb624e94_add5_4a50_a526_406e280c364c.slice - libcontainer container kubepods-besteffort-poddb624e94_add5_4a50_a526_406e280c364c.slice. 
Sep 5 23:53:29.266533 kubelet[2474]: I0905 23:53:29.266396 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db624e94-add5-4a50-a526-406e280c364c-whisker-backend-key-pair\") pod \"whisker-86949fc78-xrglm\" (UID: \"db624e94-add5-4a50-a526-406e280c364c\") " pod="calico-system/whisker-86949fc78-xrglm" Sep 5 23:53:29.266533 kubelet[2474]: I0905 23:53:29.266445 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db624e94-add5-4a50-a526-406e280c364c-whisker-ca-bundle\") pod \"whisker-86949fc78-xrglm\" (UID: \"db624e94-add5-4a50-a526-406e280c364c\") " pod="calico-system/whisker-86949fc78-xrglm" Sep 5 23:53:29.266533 kubelet[2474]: I0905 23:53:29.266486 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrtf\" (UniqueName: \"kubernetes.io/projected/db624e94-add5-4a50-a526-406e280c364c-kube-api-access-xcrtf\") pod \"whisker-86949fc78-xrglm\" (UID: \"db624e94-add5-4a50-a526-406e280c364c\") " pod="calico-system/whisker-86949fc78-xrglm" Sep 5 23:53:29.403338 containerd[1435]: time="2025-09-05T23:53:29.403275603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86949fc78-xrglm,Uid:db624e94-add5-4a50-a526-406e280c364c,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:29.521225 kubelet[2474]: I0905 23:53:29.519241 2474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939a9432-9bb9-45cc-becf-b30fdb72d8af" path="/var/lib/kubelet/pods/939a9432-9bb9-45cc-becf-b30fdb72d8af/volumes" Sep 5 23:53:29.546779 systemd-networkd[1380]: caliaea0bbb390d: Link UP Sep 5 23:53:29.547741 systemd-networkd[1380]: caliaea0bbb390d: Gained carrier Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.444 [INFO][3803] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.461 [INFO][3803] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86949fc78--xrglm-eth0 whisker-86949fc78- calico-system db624e94-add5-4a50-a526-406e280c364c 888 0 2025-09-05 23:53:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86949fc78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86949fc78-xrglm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaea0bbb390d [] [] }} ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.462 [INFO][3803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.486 [INFO][3817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" HandleID="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" 
Workload="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.486 [INFO][3817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" HandleID="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Workload="localhost-k8s-whisker--86949fc78--xrglm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86949fc78-xrglm", "timestamp":"2025-09-05 23:53:29.486028063 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.486 [INFO][3817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.486 [INFO][3817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.486 [INFO][3817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.501 [INFO][3817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.507 [INFO][3817] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.513 [INFO][3817] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.516 [INFO][3817] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.519 [INFO][3817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.519 [INFO][3817] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.521 [INFO][3817] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.526 [INFO][3817] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.537 [INFO][3817] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.537 [INFO][3817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" host="localhost" Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.537 [INFO][3817] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Sep 5 23:53:29.564926 containerd[1435]: 2025-09-05 23:53:29.537 [INFO][3817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" HandleID="k8s-pod-network.a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Workload="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.540 [INFO][3803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86949fc78--xrglm-eth0", GenerateName:"whisker-86949fc78-", Namespace:"calico-system", SelfLink:"", UID:"db624e94-add5-4a50-a526-406e280c364c", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86949fc78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86949fc78-xrglm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaea0bbb390d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.540 [INFO][3803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.540 [INFO][3803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaea0bbb390d ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.547 [INFO][3803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.547 [INFO][3803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86949fc78--xrglm-eth0", GenerateName:"whisker-86949fc78-", Namespace:"calico-system", SelfLink:"", UID:"db624e94-add5-4a50-a526-406e280c364c", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86949fc78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d", Pod:"whisker-86949fc78-xrglm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaea0bbb390d", MAC:"06:58:65:e3:51:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:29.565567 containerd[1435]: 2025-09-05 23:53:29.562 [INFO][3803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d" Namespace="calico-system" Pod="whisker-86949fc78-xrglm" WorkloadEndpoint="localhost-k8s-whisker--86949fc78--xrglm-eth0" Sep 5 23:53:29.604821 containerd[1435]: time="2025-09-05T23:53:29.604715161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:29.604821 containerd[1435]: time="2025-09-05T23:53:29.604769880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:29.604821 containerd[1435]: time="2025-09-05T23:53:29.604782200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:29.605027 containerd[1435]: time="2025-09-05T23:53:29.604862880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:29.633711 systemd[1]: Started cri-containerd-a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d.scope - libcontainer container a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d. 
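The IPAM exchange above hands the whisker pod the first address Calico finds free in the node's affine block 192.168.88.128/26 (a 64-address block, .128 through .191); the later pods in this section get .130, .131 and .132 from the same block, in order. A minimal sketch of that block arithmetic with the Go standard library, illustrative only and not Calico's allocator:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host-affine block visible in the log: 192.168.88.128 - 192.168.88.191.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Walk the block in order, starting past the block's base address .128;
	// the log shows .129 as the first grant.
	addr := block.Addr().Next()
	for i := 0; i < 4; i++ {
		fmt.Println(addr) // 192.168.88.129, .130, .131, .132
		addr = addr.Next()
	}
}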
Sep 5 23:53:29.644808 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:53:29.669922 containerd[1435]: time="2025-09-05T23:53:29.669875762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86949fc78-xrglm,Uid:db624e94-add5-4a50-a526-406e280c364c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d\"" Sep 5 23:53:29.673273 containerd[1435]: time="2025-09-05T23:53:29.673222038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:53:29.710298 kubelet[2474]: I0905 23:53:29.710264 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:31.063659 systemd-networkd[1380]: caliaea0bbb390d: Gained IPv6LL Sep 5 23:53:31.259875 containerd[1435]: time="2025-09-05T23:53:31.259829625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.260791 containerd[1435]: time="2025-09-05T23:53:31.260607544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:53:31.261682 containerd[1435]: time="2025-09-05T23:53:31.261644663Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.264494 containerd[1435]: time="2025-09-05T23:53:31.264265140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.265031 containerd[1435]: time="2025-09-05T23:53:31.264982179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.591715861s" Sep 5 23:53:31.265031 containerd[1435]: time="2025-09-05T23:53:31.265016619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:53:31.270194 containerd[1435]: time="2025-09-05T23:53:31.270162293Z" level=info msg="CreateContainer within sandbox \"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:53:31.282080 containerd[1435]: time="2025-09-05T23:53:31.281977840Z" level=info msg="CreateContainer within sandbox \"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5af2f584c07f04ed34cc739e8090cfbabbbb84011a137819018db6bac7f88e5c\"" Sep 5 23:53:31.282742 containerd[1435]: time="2025-09-05T23:53:31.282555759Z" level=info msg="StartContainer for \"5af2f584c07f04ed34cc739e8090cfbabbbb84011a137819018db6bac7f88e5c\"" Sep 5 23:53:31.315650 systemd[1]: Started cri-containerd-5af2f584c07f04ed34cc739e8090cfbabbbb84011a137819018db6bac7f88e5c.scope - libcontainer container 5af2f584c07f04ed34cc739e8090cfbabbbb84011a137819018db6bac7f88e5c. 
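The "in 1.591715861s" figure on the Pulled line above is consistent with the surrounding log-record timestamps: the PullImage request was logged at 23:53:29.673222038Z and the Pulled message at 23:53:31.264982179Z. A quick check of that arithmetic (timestamps copied verbatim from the log; the roughly 44 microsecond gap is expected, since log-record times only bracket containerd's internal measurement):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Log-record timestamps bracketing the whisker image pull, from above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-05T23:53:29.673222038Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-09-05T23:53:31.264982179Z")
	fmt.Println(done.Sub(start)) // 1.591760141s, vs. the reported 1.591715861s
}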
Sep 5 23:53:31.343835 containerd[1435]: time="2025-09-05T23:53:31.343774330Z" level=info msg="StartContainer for \"5af2f584c07f04ed34cc739e8090cfbabbbb84011a137819018db6bac7f88e5c\" returns successfully" Sep 5 23:53:31.345003 containerd[1435]: time="2025-09-05T23:53:31.344944688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:53:32.914532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909733913.mount: Deactivated successfully. Sep 5 23:53:32.955366 containerd[1435]: time="2025-09-05T23:53:32.954784337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.955874 containerd[1435]: time="2025-09-05T23:53:32.955832416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:53:32.956637 containerd[1435]: time="2025-09-05T23:53:32.956602815Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.959214 containerd[1435]: time="2025-09-05T23:53:32.959180532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.960030 containerd[1435]: time="2025-09-05T23:53:32.959998171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.615006443s" Sep 5 23:53:32.960030 containerd[1435]: time="2025-09-05T23:53:32.960033291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:53:32.964368 containerd[1435]: time="2025-09-05T23:53:32.964326167Z" level=info msg="CreateContainer within sandbox \"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:53:32.974453 containerd[1435]: time="2025-09-05T23:53:32.974331116Z" level=info msg="CreateContainer within sandbox \"a8774b818afb286609e57fe3b67afcab0b23a8d742caf3b96bcb35af4d795d0d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3bb177186dce146c92463c64d4dde9963833a65cf4f046a8a6bbb6193617221d\"" Sep 5 23:53:32.974988 containerd[1435]: time="2025-09-05T23:53:32.974808395Z" level=info msg="StartContainer for \"3bb177186dce146c92463c64d4dde9963833a65cf4f046a8a6bbb6193617221d\"" Sep 5 23:53:33.030335 systemd[1]: Started cri-containerd-3bb177186dce146c92463c64d4dde9963833a65cf4f046a8a6bbb6193617221d.scope - libcontainer container 3bb177186dce146c92463c64d4dde9963833a65cf4f046a8a6bbb6193617221d. 
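About 1.5 seconds after the sandbox came up, systemd-networkd reported "caliaea0bbb390d: Gained IPv6LL": the host-side veth acquired an IPv6 link-local address once duplicate-address detection finished. The log does not show the address itself, and systemd-networkd may derive the interface identifier from a stable-privacy token rather than the MAC, but the classic EUI-64 derivation is easy to illustrate (using the container-side MAC recorded above purely as a stand-in input):

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal derives the classic EUI-64 IPv6 link-local address for a MAC:
// fe80::/64 prefix, ff:fe spliced into the middle of the MAC, and the
// universal/local bit of the first octet flipped.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80
	b[8] = mac[0] ^ 0x02
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("06:58:65:e3:51:5b") // container-side MAC from the log
	fmt.Println(eui64LinkLocal(mac))            // fe80::458:65ff:fee3:515b
}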
Sep 5 23:53:33.064517 containerd[1435]: time="2025-09-05T23:53:33.064435218Z" level=info msg="StartContainer for \"3bb177186dce146c92463c64d4dde9963833a65cf4f046a8a6bbb6193617221d\" returns successfully" Sep 5 23:53:35.516904 containerd[1435]: time="2025-09-05T23:53:35.516861732Z" level=info msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\"" Sep 5 23:53:35.581011 kubelet[2474]: I0905 23:53:35.580516 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86949fc78-xrglm" podStartSLOduration=3.290331819 podStartE2EDuration="6.580496108s" podCreationTimestamp="2025-09-05 23:53:29 +0000 UTC" firstStartedPulling="2025-09-05 23:53:29.672918959 +0000 UTC m=+32.233257196" lastFinishedPulling="2025-09-05 23:53:32.963083248 +0000 UTC m=+35.523421485" observedRunningTime="2025-09-05 23:53:33.736750739 +0000 UTC m=+36.297088976" watchObservedRunningTime="2025-09-05 23:53:35.580496108 +0000 UTC m=+38.140834305" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.579 [INFO][4202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.579 [INFO][4202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" iface="eth0" netns="/var/run/netns/cni-742ac3d6-738a-631d-36d3-c451d3c8969f" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.579 [INFO][4202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" iface="eth0" netns="/var/run/netns/cni-742ac3d6-738a-631d-36d3-c451d3c8969f" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.582 [INFO][4202] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" iface="eth0" netns="/var/run/netns/cni-742ac3d6-738a-631d-36d3-c451d3c8969f" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.583 [INFO][4202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.583 [INFO][4202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.604 [INFO][4212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.604 [INFO][4212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.604 [INFO][4212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.613 [WARNING][4212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.613 [INFO][4212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.619 [INFO][4212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:35.622959 containerd[1435]: 2025-09-05 23:53:35.621 [INFO][4202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Sep 5 23:53:35.623508 containerd[1435]: time="2025-09-05T23:53:35.623086985Z" level=info msg="TearDown network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" successfully" Sep 5 23:53:35.623508 containerd[1435]: time="2025-09-05T23:53:35.623113784Z" level=info msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" returns successfully" Sep 5 23:53:35.624040 containerd[1435]: time="2025-09-05T23:53:35.623824864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-8z7q2,Uid:d49fff1f-0b54-40b8-bccc-fc99d8d97bff,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:53:35.625211 systemd[1]: run-netns-cni\x2d742ac3d6\x2d738a\x2d631d\x2d36d3\x2dc451d3c8969f.mount: Deactivated successfully. Sep 5 23:53:35.760013 systemd-networkd[1380]: cali144ecf1450b: Link UP Sep 5 23:53:35.760660 systemd-networkd[1380]: cali144ecf1450b: Gained carrier Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.682 [INFO][4221] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.696 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0 calico-apiserver-857bdbd65- calico-apiserver d49fff1f-0b54-40b8-bccc-fc99d8d97bff 919 0 2025-09-05 23:53:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857bdbd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-857bdbd65-8z7q2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali144ecf1450b [] [] }} ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.696 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.720 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" HandleID="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.720 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" HandleID="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000511780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-857bdbd65-8z7q2", "timestamp":"2025-09-05 23:53:35.720117646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.720 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.720 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.720 [INFO][4237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.730 [INFO][4237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.734 [INFO][4237] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.737 [INFO][4237] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.740 [INFO][4237] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.743 [INFO][4237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.743 [INFO][4237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.745 [INFO][4237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.749 [INFO][4237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.755 [INFO][4237] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.755 [INFO][4237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" host="localhost" Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.755 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:35.773708 containerd[1435]: 2025-09-05 23:53:35.755 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" HandleID="k8s-pod-network.48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.757 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"d49fff1f-0b54-40b8-bccc-fc99d8d97bff", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-857bdbd65-8z7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144ecf1450b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.757 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.758 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali144ecf1450b ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.760 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.760 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"d49fff1f-0b54-40b8-bccc-fc99d8d97bff", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e", Pod:"calico-apiserver-857bdbd65-8z7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144ecf1450b", MAC:"42:8a:68:86:3e:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:35.774366 containerd[1435]: 2025-09-05 23:53:35.771 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-8z7q2" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0" Sep 5 23:53:35.793501 containerd[1435]: time="2025-09-05T23:53:35.792183813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:35.793501 containerd[1435]: time="2025-09-05T23:53:35.792260253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:35.793501 containerd[1435]: time="2025-09-05T23:53:35.792276373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:35.793501 containerd[1435]: time="2025-09-05T23:53:35.792770452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:35.822667 systemd[1]: Started cri-containerd-48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e.scope - libcontainer container 48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e. 
Sep 5 23:53:35.834997 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:53:35.850689 containerd[1435]: time="2025-09-05T23:53:35.850651234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-8z7q2,Uid:d49fff1f-0b54-40b8-bccc-fc99d8d97bff,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e\"" Sep 5 23:53:35.852742 containerd[1435]: time="2025-09-05T23:53:35.852694352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:53:35.937812 kubelet[2474]: I0905 23:53:35.937558 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:36.517761 containerd[1435]: time="2025-09-05T23:53:36.517578250Z" level=info msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" Sep 5 23:53:36.518176 containerd[1435]: time="2025-09-05T23:53:36.517628290Z" level=info msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" Sep 5 23:53:36.518780 containerd[1435]: time="2025-09-05T23:53:36.517699450Z" level=info msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\"" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" iface="eth0" netns="/var/run/netns/cni-1df00242-13c5-570f-968f-39af154ff419" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" iface="eth0" netns="/var/run/netns/cni-1df00242-13c5-570f-968f-39af154ff419" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" iface="eth0" netns="/var/run/netns/cni-1df00242-13c5-570f-968f-39af154ff419" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.580 [INFO][4390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.609 [INFO][4414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.609 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.609 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.628 [WARNING][4414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.628 [INFO][4414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.641 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:36.649341 containerd[1435]: 2025-09-05 23:53:36.647 [INFO][4390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:36.649851 containerd[1435]: time="2025-09-05T23:53:36.649503679Z" level=info msg="TearDown network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" successfully" Sep 5 23:53:36.649851 containerd[1435]: time="2025-09-05T23:53:36.649530959Z" level=info msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" returns successfully" Sep 5 23:53:36.650615 containerd[1435]: time="2025-09-05T23:53:36.650169519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lrvz2,Uid:35cf1f2e-da60-43ff-8f52-05621e07097d,Namespace:calico-system,Attempt:1,}" Sep 5 23:53:36.652233 systemd[1]: run-netns-cni\x2d1df00242\x2d13c5\x2d570f\x2d968f\x2d39af154ff419.mount: Deactivated successfully. Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" iface="eth0" netns="/var/run/netns/cni-56dc4ea2-1b7a-0335-f6f8-5d277b78c6af" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" iface="eth0" netns="/var/run/netns/cni-56dc4ea2-1b7a-0335-f6f8-5d277b78c6af" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" iface="eth0" netns="/var/run/netns/cni-56dc4ea2-1b7a-0335-f6f8-5d277b78c6af" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.596 [INFO][4395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.628 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.628 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.644 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.656 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.656 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.658 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:36.663462 containerd[1435]: 2025-09-05 23:53:36.661 [INFO][4395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:36.664962 containerd[1435]: time="2025-09-05T23:53:36.664898024Z" level=info msg="TearDown network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" successfully" Sep 5 23:53:36.664962 containerd[1435]: time="2025-09-05T23:53:36.664958584Z" level=info msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" returns successfully" Sep 5 23:53:36.666015 containerd[1435]: time="2025-09-05T23:53:36.665781743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-rfjbb,Uid:fcbb6ba6-1660-45d6-89cb-7cb16d9b9272,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:53:36.667041 systemd[1]: run-netns-cni\x2d56dc4ea2\x2d1b7a\x2d0335\x2df6f8\x2d5d277b78c6af.mount: Deactivated successfully. Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.585 [INFO][4386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.585 [INFO][4386] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" iface="eth0" netns="/var/run/netns/cni-a8e5e884-94b3-de0f-f777-eaf026ff027e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.585 [INFO][4386] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" iface="eth0" netns="/var/run/netns/cni-a8e5e884-94b3-de0f-f777-eaf026ff027e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.586 [INFO][4386] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" iface="eth0" netns="/var/run/netns/cni-a8e5e884-94b3-de0f-f777-eaf026ff027e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.586 [INFO][4386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.586 [INFO][4386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.631 [INFO][4422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.631 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.658 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.668 [WARNING][4422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.668 [INFO][4422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.670 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:36.674092 containerd[1435]: 2025-09-05 23:53:36.672 [INFO][4386] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:36.675154 containerd[1435]: time="2025-09-05T23:53:36.674233135Z" level=info msg="TearDown network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" successfully" Sep 5 23:53:36.675154 containerd[1435]: time="2025-09-05T23:53:36.674257735Z" level=info msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" returns successfully" Sep 5 23:53:36.675203 kubelet[2474]: E0905 23:53:36.674531 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:36.676162 systemd[1]: run-netns-cni\x2da8e5e884\x2d94b3\x2dde0f\x2df777\x2deaf026ff027e.mount: Deactivated successfully. Sep 5 23:53:36.676861 containerd[1435]: time="2025-09-05T23:53:36.676677532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6k77q,Uid:24814833-7fb2-4299-991a-3537be9695bf,Namespace:kube-system,Attempt:1,}" Sep 5 23:53:36.849790 systemd-networkd[1380]: cali77df037e22a: Link UP Sep 5 23:53:36.849999 systemd-networkd[1380]: cali77df037e22a: Gained carrier Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.753 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.769 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--lrvz2-eth0 goldmane-7988f88666- calico-system 35cf1f2e-da60-43ff-8f52-05621e07097d 933 0 2025-09-05 23:53:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-lrvz2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali77df037e22a [] [] }} ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.769 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.807 [INFO][4487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" HandleID="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.807 [INFO][4487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" HandleID="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c26b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-lrvz2", "timestamp":"2025-09-05 
23:53:36.807303243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.807 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.807 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.807 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.817 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.822 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.826 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.828 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.830 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.830 [INFO][4487] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.831 [INFO][4487] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.835 [INFO][4487] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4487] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" host="localhost" Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:36.865767 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" HandleID="k8s-pod-network.0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.843 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lrvz2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"35cf1f2e-da60-43ff-8f52-05621e07097d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-lrvz2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77df037e22a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.843 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.843 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77df037e22a ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.846 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.846 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lrvz2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"35cf1f2e-da60-43ff-8f52-05621e07097d", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b", Pod:"goldmane-7988f88666-lrvz2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77df037e22a", MAC:"ba:c3:92:17:54:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:36.866369 containerd[1435]: 2025-09-05 23:53:36.863 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b" Namespace="calico-system" Pod="goldmane-7988f88666-lrvz2" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:36.880673 containerd[1435]: time="2025-09-05T23:53:36.880113131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:36.880673 containerd[1435]: time="2025-09-05T23:53:36.880465211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:36.880673 containerd[1435]: time="2025-09-05T23:53:36.880503491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:36.880673 containerd[1435]: time="2025-09-05T23:53:36.880590651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:36.902699 systemd[1]: Started cri-containerd-0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b.scope - libcontainer container 0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b. 
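The goldmane endpoint written above records a fresh container-side MAC (ba:c3:92:17:54:55 here; 06:58:65:e3:51:5b and 42:8a:68:86:3e:2f for the earlier pods). All of them are locally administered unicast addresses: the 0x02 bit of the first octet is set and the 0x01 bit is clear. The log does not show how the address is chosen, but generating one with that shape looks like this:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random locally administered unicast MAC, the
// same shape as every container MAC recorded in this log.
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // set the local bit, clear the multicast bit
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. ba:c3:92:17:54:55
}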
Sep 5 23:53:36.928337 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:53:36.959347 containerd[1435]: time="2025-09-05T23:53:36.959301093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-lrvz2,Uid:35cf1f2e-da60-43ff-8f52-05621e07097d,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b\"" Sep 5 23:53:36.970259 systemd-networkd[1380]: calie0aa493b499: Link UP Sep 5 23:53:36.971616 systemd-networkd[1380]: calie0aa493b499: Gained carrier Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.756 [INFO][4457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.778 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0 calico-apiserver-857bdbd65- calico-apiserver fcbb6ba6-1660-45d6-89cb-7cb16d9b9272 935 0 2025-09-05 23:53:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:857bdbd65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-857bdbd65-rfjbb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0aa493b499 [] [] }} ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.779 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.818 [INFO][4493] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" HandleID="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.818 [INFO][4493] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" HandleID="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-857bdbd65-rfjbb", "timestamp":"2025-09-05 23:53:36.818683072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.818 [INFO][4493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.841 [INFO][4493] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.920 [INFO][4493] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.930 [INFO][4493] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.935 [INFO][4493] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.938 [INFO][4493] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.943 [INFO][4493] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.943 [INFO][4493] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.947 [INFO][4493] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7 Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.954 [INFO][4493] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.961 [INFO][4493] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.961 [INFO][4493] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" host="localhost" Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.961 [INFO][4493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:36.987649 containerd[1435]: 2025-09-05 23:53:36.962 [INFO][4493] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" HandleID="k8s-pod-network.b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0"
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.964 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-857bdbd65-rfjbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0aa493b499", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.964 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0"
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.965 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0aa493b499 ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0"
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.971 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0"
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.972 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7", Pod:"calico-apiserver-857bdbd65-rfjbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0aa493b499", MAC:"f2:36:2c:fc:47:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:36.988255 containerd[1435]: 2025-09-05 23:53:36.984 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7" Namespace="calico-apiserver" Pod="calico-apiserver-857bdbd65-rfjbb" WorkloadEndpoint="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0"
Sep 5 23:53:37.012287 containerd[1435]: time="2025-09-05T23:53:37.012111361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:37.012287 containerd[1435]: time="2025-09-05T23:53:37.012181161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:37.012287 containerd[1435]: time="2025-09-05T23:53:37.012233001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:37.012803 containerd[1435]: time="2025-09-05T23:53:37.012331520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:37.042873 systemd[1]: Started cri-containerd-b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7.scope - libcontainer container b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7.
Sep 5 23:53:37.060228 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 23:53:37.090015 containerd[1435]: time="2025-09-05T23:53:37.089975165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-857bdbd65-rfjbb,Uid:fcbb6ba6-1660-45d6-89cb-7cb16d9b9272,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7\""
Sep 5 23:53:37.139488 systemd-networkd[1380]: calie991b4a1f0c: Link UP
Sep 5 23:53:37.139816 systemd-networkd[1380]: calie991b4a1f0c: Gained carrier
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.768 [INFO][4454] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.784 [INFO][4454] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0 coredns-7c65d6cfc9- kube-system 24814833-7fb2-4299-991a-3537be9695bf 934 0 2025-09-05 23:53:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-6k77q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie991b4a1f0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.784 [INFO][4454] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.820 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" HandleID="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.820 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" HandleID="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-6k77q", "timestamp":"2025-09-05 23:53:36.82017519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.820 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.962 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:36.962 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.021 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.041 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.052 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.055 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.060 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.060 [INFO][4499] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.063 [INFO][4499] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.074 [INFO][4499] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.135 [INFO][4499] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.135 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" host="localhost"
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.135 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:37.152540 containerd[1435]: 2025-09-05 23:53:37.135 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" HandleID="k8s-pod-network.5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.137 [INFO][4454] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24814833-7fb2-4299-991a-3537be9695bf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-6k77q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie991b4a1f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.137 [INFO][4454] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.137 [INFO][4454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie991b4a1f0c ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.140 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.140 [INFO][4454] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24814833-7fb2-4299-991a-3537be9695bf", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4", Pod:"coredns-7c65d6cfc9-6k77q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie991b4a1f0c", MAC:"ea:25:c3:01:b3:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:37.153078 containerd[1435]: 2025-09-05 23:53:37.150 [INFO][4454] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6k77q" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0"
Sep 5 23:53:37.166876 containerd[1435]: time="2025-09-05T23:53:37.166804091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:37.167208 containerd[1435]: time="2025-09-05T23:53:37.166850571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:37.167208 containerd[1435]: time="2025-09-05T23:53:37.167084931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:37.167729 containerd[1435]: time="2025-09-05T23:53:37.167686970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:37.188683 systemd[1]: Started cri-containerd-5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4.scope - libcontainer container 5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4.
Sep 5 23:53:37.200082 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 23:53:37.218510 containerd[1435]: time="2025-09-05T23:53:37.218447801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6k77q,Uid:24814833-7fb2-4299-991a-3537be9695bf,Namespace:kube-system,Attempt:1,} returns sandbox id \"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4\""
Sep 5 23:53:37.220614 kubelet[2474]: E0905 23:53:37.220590 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:37.226072 containerd[1435]: time="2025-09-05T23:53:37.226000554Z" level=info msg="CreateContainer within sandbox \"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 23:53:37.303612 containerd[1435]: time="2025-09-05T23:53:37.303509239Z" level=info msg="CreateContainer within sandbox \"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41d4b76b8dc90307a5f8790568a8bd1dfd75114e87f3f1a94cf9d94a0fcffe29\""
Sep 5 23:53:37.304839 containerd[1435]: time="2025-09-05T23:53:37.304489678Z" level=info msg="StartContainer for \"41d4b76b8dc90307a5f8790568a8bd1dfd75114e87f3f1a94cf9d94a0fcffe29\""
Sep 5 23:53:37.356863 systemd[1]: Started cri-containerd-41d4b76b8dc90307a5f8790568a8bd1dfd75114e87f3f1a94cf9d94a0fcffe29.scope - libcontainer container 41d4b76b8dc90307a5f8790568a8bd1dfd75114e87f3f1a94cf9d94a0fcffe29.
Sep 5 23:53:37.401980 containerd[1435]: time="2025-09-05T23:53:37.401792424Z" level=info msg="StartContainer for \"41d4b76b8dc90307a5f8790568a8bd1dfd75114e87f3f1a94cf9d94a0fcffe29\" returns successfully"
Sep 5 23:53:37.656641 systemd-networkd[1380]: cali144ecf1450b: Gained IPv6LL
Sep 5 23:53:37.745682 kubelet[2474]: E0905 23:53:37.745268 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:37.779902 kubelet[2474]: I0905 23:53:37.779741 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6k77q" podStartSLOduration=34.779724299 podStartE2EDuration="34.779724299s" podCreationTimestamp="2025-09-05 23:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:37.761204997 +0000 UTC m=+40.321543354" watchObservedRunningTime="2025-09-05 23:53:37.779724299 +0000 UTC m=+40.340062536"
Sep 5 23:53:38.005801 containerd[1435]: time="2025-09-05T23:53:38.005745361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:38.006292 containerd[1435]: time="2025-09-05T23:53:38.006242600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 5 23:53:38.007142 containerd[1435]: time="2025-09-05T23:53:38.007103439Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:38.009400 containerd[1435]: time="2025-09-05T23:53:38.009359997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:53:38.010134 containerd[1435]: time="2025-09-05T23:53:38.010093436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.157346444s"
Sep 5 23:53:38.010134 containerd[1435]: time="2025-09-05T23:53:38.010126036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 5 23:53:38.011260 containerd[1435]: time="2025-09-05T23:53:38.011079035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 5 23:53:38.012244 containerd[1435]: time="2025-09-05T23:53:38.012214234Z" level=info msg="CreateContainer within sandbox \"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 23:53:38.023639 containerd[1435]: time="2025-09-05T23:53:38.023594504Z" level=info msg="CreateContainer within sandbox \"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"26dd7e6d91184b9953e728300c80ed6f6005c801727fa45b7c05f386c246fea9\""
Sep 5 23:53:38.024113 containerd[1435]: time="2025-09-05T23:53:38.024051943Z" level=info msg="StartContainer for \"26dd7e6d91184b9953e728300c80ed6f6005c801727fa45b7c05f386c246fea9\""
Sep 5 23:53:38.062682 systemd[1]: Started cri-containerd-26dd7e6d91184b9953e728300c80ed6f6005c801727fa45b7c05f386c246fea9.scope - libcontainer container 26dd7e6d91184b9953e728300c80ed6f6005c801727fa45b7c05f386c246fea9.
Sep 5 23:53:38.099045 containerd[1435]: time="2025-09-05T23:53:38.098938192Z" level=info msg="StartContainer for \"26dd7e6d91184b9953e728300c80ed6f6005c801727fa45b7c05f386c246fea9\" returns successfully"
Sep 5 23:53:38.359837 systemd-networkd[1380]: cali77df037e22a: Gained IPv6LL
Sep 5 23:53:38.517687 containerd[1435]: time="2025-09-05T23:53:38.517629837Z" level=info msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\""
Sep 5 23:53:38.518787 containerd[1435]: time="2025-09-05T23:53:38.518437796Z" level=info msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\""
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.570 [INFO][4822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.571 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" iface="eth0" netns="/var/run/netns/cni-66966d56-ed79-cf98-2cac-c36dfaf26295"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.571 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" iface="eth0" netns="/var/run/netns/cni-66966d56-ed79-cf98-2cac-c36dfaf26295"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.571 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" iface="eth0" netns="/var/run/netns/cni-66966d56-ed79-cf98-2cac-c36dfaf26295"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.571 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.571 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.592 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.593 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.593 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.604 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.604 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.605 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:38.612244 containerd[1435]: 2025-09-05 23:53:38.607 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3"
Sep 5 23:53:38.614612 containerd[1435]: time="2025-09-05T23:53:38.614487186Z" level=info msg="TearDown network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" successfully"
Sep 5 23:53:38.614612 containerd[1435]: time="2025-09-05T23:53:38.614521866Z" level=info msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" returns successfully"
Sep 5 23:53:38.615913 kubelet[2474]: E0905 23:53:38.615465 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:38.616221 containerd[1435]: time="2025-09-05T23:53:38.616189504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc2b4,Uid:b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3,Namespace:kube-system,Attempt:1,}"
Sep 5 23:53:38.625995 systemd[1]: run-netns-cni\x2d66966d56\x2ded79\x2dcf98\x2d2cac\x2dc36dfaf26295.mount: Deactivated successfully.
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.580 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.580 [INFO][4821] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" iface="eth0" netns="/var/run/netns/cni-44145019-f34c-4621-5d93-8a6aebd3dc48"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.580 [INFO][4821] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" iface="eth0" netns="/var/run/netns/cni-44145019-f34c-4621-5d93-8a6aebd3dc48"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.581 [INFO][4821] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" iface="eth0" netns="/var/run/netns/cni-44145019-f34c-4621-5d93-8a6aebd3dc48"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.581 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.581 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.607 [INFO][4846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.607 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.607 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.620 [WARNING][4846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.620 [INFO][4846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0"
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.622 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:38.627672 containerd[1435]: 2025-09-05 23:53:38.624 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b"
Sep 5 23:53:38.628079 containerd[1435]: time="2025-09-05T23:53:38.627967733Z" level=info msg="TearDown network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" successfully"
Sep 5 23:53:38.628079 containerd[1435]: time="2025-09-05T23:53:38.627992773Z" level=info msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" returns successfully"
Sep 5 23:53:38.630208 systemd[1]: run-netns-cni\x2d44145019\x2df34c\x2d4621\x2d5d93\x2d8a6aebd3dc48.mount: Deactivated successfully.
Sep 5 23:53:38.630912 containerd[1435]: time="2025-09-05T23:53:38.630883290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db9444479-4dqph,Uid:5e413f16-c817-4175-8b6a-241c2e2a0ced,Namespace:calico-system,Attempt:1,}"
Sep 5 23:53:38.679663 systemd-networkd[1380]: calie991b4a1f0c: Gained IPv6LL
Sep 5 23:53:38.757543 kubelet[2474]: E0905 23:53:38.757503 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:38.781479 systemd-networkd[1380]: cali344441d10e4: Link UP
Sep 5 23:53:38.781729 systemd-networkd[1380]: cali344441d10e4: Gained carrier
Sep 5 23:53:38.795658 kubelet[2474]: I0905 23:53:38.795592 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857bdbd65-8z7q2" podStartSLOduration=24.636752731 podStartE2EDuration="26.795574215s" podCreationTimestamp="2025-09-05 23:53:12 +0000 UTC" firstStartedPulling="2025-09-05 23:53:35.851999152 +0000 UTC m=+38.412337389" lastFinishedPulling="2025-09-05 23:53:38.010820636 +0000 UTC m=+40.571158873" observedRunningTime="2025-09-05 23:53:38.774031395 +0000 UTC m=+41.334369632" watchObservedRunningTime="2025-09-05 23:53:38.795574215 +0000 UTC m=+41.355912452"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.669 [INFO][4854] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.682 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0 coredns-7c65d6cfc9- kube-system b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3 971 0 2025-09-05 23:53:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-hc2b4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali344441d10e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.682 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.720 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" HandleID="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.721 [INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" HandleID="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-hc2b4", "timestamp":"2025-09-05 23:53:38.720888845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.721 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.722 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.722 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.735 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.740 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.744 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.746 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.749 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.749 [INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.752 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.755 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4881] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" host="localhost"
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:38.799557 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" HandleID="k8s-pod-network.fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.777 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-hc2b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali344441d10e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.777 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.777 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali344441d10e4 ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.784 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.784 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5", Pod:"coredns-7c65d6cfc9-hc2b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali344441d10e4", MAC:"b6:08:1a:a7:9c:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:38.800306 containerd[1435]: 2025-09-05 23:53:38.796 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hc2b4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0"
Sep 5 23:53:38.820687 containerd[1435]: time="2025-09-05T23:53:38.820587751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:53:38.820687 containerd[1435]: time="2025-09-05T23:53:38.820647711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:53:38.820687 containerd[1435]: time="2025-09-05T23:53:38.820663151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:38.821674 containerd[1435]: time="2025-09-05T23:53:38.821570830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:53:38.838111 systemd[1]: Started cri-containerd-fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5.scope - libcontainer container fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5.
Sep 5 23:53:38.855991 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 23:53:38.888583 containerd[1435]: time="2025-09-05T23:53:38.888072647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hc2b4,Uid:b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5\""
Sep 5 23:53:38.888748 systemd-networkd[1380]: calibb33f64fa52: Link UP
Sep 5 23:53:38.889082 systemd-networkd[1380]: calibb33f64fa52: Gained carrier
Sep 5 23:53:38.894359 kubelet[2474]: E0905 23:53:38.894334 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:53:38.898460 containerd[1435]: time="2025-09-05T23:53:38.898417877Z" level=info msg="CreateContainer within sandbox \"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.683 [INFO][4864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.701 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0 calico-kube-controllers-6db9444479- calico-system 5e413f16-c817-4175-8b6a-241c2e2a0ced 972 0 2025-09-05 23:53:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6db9444479 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6db9444479-4dqph eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb33f64fa52 [] [] }} ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.701 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.741 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" HandleID="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.741 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" HandleID="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6db9444479-4dqph", "timestamp":"2025-09-05 23:53:38.741089306 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.741 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.774 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.838 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.847 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.853 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.857 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.862 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.862 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.864 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.872 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.883 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.883 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" host="localhost"
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.883 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:38.908427 containerd[1435]: 2025-09-05 23:53:38.883 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" HandleID="k8s-pod-network.607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.886 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0", GenerateName:"calico-kube-controllers-6db9444479-", Namespace:"calico-system", SelfLink:"", UID:"5e413f16-c817-4175-8b6a-241c2e2a0ced", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db9444479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6db9444479-4dqph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb33f64fa52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.886 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.886 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb33f64fa52 ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.889 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.889 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0", GenerateName:"calico-kube-controllers-6db9444479-", Namespace:"calico-system", SelfLink:"", UID:"5e413f16-c817-4175-8b6a-241c2e2a0ced", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db9444479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8", Pod:"calico-kube-controllers-6db9444479-4dqph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb33f64fa52", MAC:"56:9e:b0:22:09:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:38.909271 containerd[1435]: 2025-09-05 23:53:38.905 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8" Namespace="calico-system" Pod="calico-kube-controllers-6db9444479-4dqph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:38.917913 containerd[1435]: time="2025-09-05T23:53:38.917872499Z" level=info msg="CreateContainer within sandbox \"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b116e76fc4be8caafd791fb46b42e69b6442216d483cc8a0c7923afdfa4fbcd\"" Sep 5 23:53:38.918936 containerd[1435]: time="2025-09-05T23:53:38.918884818Z" level=info msg="StartContainer for \"5b116e76fc4be8caafd791fb46b42e69b6442216d483cc8a0c7923afdfa4fbcd\"" Sep 5 23:53:38.926254 containerd[1435]: time="2025-09-05T23:53:38.926173531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:38.926254 containerd[1435]: time="2025-09-05T23:53:38.926234171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:38.926368 containerd[1435]: time="2025-09-05T23:53:38.926249851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:38.926368 containerd[1435]: time="2025-09-05T23:53:38.926327931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:38.945657 systemd[1]: Started cri-containerd-5b116e76fc4be8caafd791fb46b42e69b6442216d483cc8a0c7923afdfa4fbcd.scope - libcontainer container 5b116e76fc4be8caafd791fb46b42e69b6442216d483cc8a0c7923afdfa4fbcd. Sep 5 23:53:38.947797 systemd[1]: Started cri-containerd-607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8.scope - libcontainer container 607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8. Sep 5 23:53:38.967113 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:53:38.973747 containerd[1435]: time="2025-09-05T23:53:38.973703006Z" level=info msg="StartContainer for \"5b116e76fc4be8caafd791fb46b42e69b6442216d483cc8a0c7923afdfa4fbcd\" returns successfully" Sep 5 23:53:38.990065 containerd[1435]: time="2025-09-05T23:53:38.990024351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db9444479-4dqph,Uid:5e413f16-c817-4175-8b6a-241c2e2a0ced,Namespace:calico-system,Attempt:1,} returns sandbox id \"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8\"" Sep 5 23:53:38.999588 systemd-networkd[1380]: calie0aa493b499: Gained IPv6LL Sep 5 23:53:39.517978 containerd[1435]: time="2025-09-05T23:53:39.517857223Z" level=info msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.591 [INFO][5068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.591 [INFO][5068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" iface="eth0" netns="/var/run/netns/cni-72de4bec-1e01-38d0-b86b-c86a209dc382" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.593 [INFO][5068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" iface="eth0" netns="/var/run/netns/cni-72de4bec-1e01-38d0-b86b-c86a209dc382" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.593 [INFO][5068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" iface="eth0" netns="/var/run/netns/cni-72de4bec-1e01-38d0-b86b-c86a209dc382" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.593 [INFO][5068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.593 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.633 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.633 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
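The teardown trace that ends here ("Releasing address using handleID", then re-taking the host-wide lock) continues in the next lines with a WARNING that the address no longer exists, which the plugin deliberately ignores: CNI DEL must be idempotent, since kubelet may retry teardown after a partial cleanup. A toy sketch of that tolerant release follows; the store type and helpers are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// store maps handleID -> assigned IP; a stand-in for the real datastore.
type store map[string]string

func (s store) release(handleID string) error {
	if _, ok := s[handleID]; !ok {
		return errNotFound
	}
	delete(s, handleID)
	return nil
}

func releaseByHandle(s store, handleID string) {
	err := s.release(handleID)
	if errors.Is(err, errNotFound) {
		// Mirror the log: warn and carry on so retried DELs still succeed.
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		return
	}
	if err == nil {
		fmt.Println("released", handleID)
	}
}

func main() {
	s := store{} // the address was already released by an earlier DEL
	releaseByHandle(s, "k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63")
}
```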
Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.634 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.644 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.644 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.649 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:39.654426 containerd[1435]: 2025-09-05 23:53:39.651 [INFO][5068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:39.657116 systemd[1]: run-netns-cni\x2d72de4bec\x2d1e01\x2d38d0\x2db86b\x2dc86a209dc382.mount: Deactivated successfully. Sep 5 23:53:39.658134 containerd[1435]: time="2025-09-05T23:53:39.657806814Z" level=info msg="TearDown network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" successfully" Sep 5 23:53:39.658134 containerd[1435]: time="2025-09-05T23:53:39.657848614Z" level=info msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" returns successfully" Sep 5 23:53:39.659480 containerd[1435]: time="2025-09-05T23:53:39.659408492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxc2c,Uid:d08ad477-f492-4278-9581-c6fba1569e81,Namespace:calico-system,Attempt:1,}" Sep 5 23:53:39.766501 kubelet[2474]: I0905 23:53:39.764849 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:39.766501 kubelet[2474]: E0905 23:53:39.765386 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:39.767746 kubelet[2474]: E0905 23:53:39.767714 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:39.782927 kubelet[2474]: I0905 23:53:39.780595 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hc2b4" podStartSLOduration=36.78057858 podStartE2EDuration="36.78057858s" podCreationTimestamp="2025-09-05 23:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:39.780131981 +0000 UTC m=+42.340470218" watchObservedRunningTime="2025-09-05 23:53:39.78057858 +0000 UTC m=+42.340916817" Sep 5 23:53:39.850016 systemd-networkd[1380]: calib3d8f108aaf: Link UP Sep 5 23:53:39.850232 systemd-networkd[1380]: calib3d8f108aaf: Gained carrier Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.711 [INFO][5093] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.728 [INFO][5093] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nxc2c-eth0 csi-node-driver- calico-system d08ad477-f492-4278-9581-c6fba1569e81 990 0 2025-09-05 23:53:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nxc2c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib3d8f108aaf [] [] }} ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.728 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.772 [INFO][5111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" HandleID="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.772 [INFO][5111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" HandleID="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001186b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nxc2c", "timestamp":"2025-09-05 23:53:39.772279268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.772 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.772 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
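The kubelet "Nameserver limits exceeded" errors a few lines up come from the classic resolv.conf cap: the resolver honors at most three nameservers, so kubelet trims any extras and logs the list it actually applied (1.1.1.1 1.0.0.1 8.8.8.8). A hedged sketch of that trimming is below; the fourth server in the input is invented purely to trigger the path, and the helper is not kubelet's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// glibc's resolver honors at most three nameservers (MAXNS).
const maxNameservers = 3

func applyNameserverLimit(servers []string) ([]string, bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Four configured servers; the last one is hypothetical here.
	applied, truncated := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	if truncated {
		fmt.Println("Nameserver limits were exceeded, the applied nameserver line is:",
			strings.Join(applied, " "))
	}
}
```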
Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.772 [INFO][5111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.789 [INFO][5111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.800 [INFO][5111] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.818 [INFO][5111] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.821 [INFO][5111] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.824 [INFO][5111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.824 [INFO][5111] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.827 [INFO][5111] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2 Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.831 [INFO][5111] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.840 [INFO][5111] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.841 [INFO][5111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" host="localhost" Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.841 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
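Every allocation and release in these traces is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock". One plausible way to get that per-node mutual exclusion is an exclusive file lock, sketched below under stated assumptions: the lock path and helper are illustrative, not Calico's actual locking code, but the effect is the same — concurrent CNI invocations on one host serialize around block state.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostLock serializes callers node-wide via an exclusive flock so that
// concurrent CNI ADD/DEL invocations cannot race on IPAM block state.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN) // "Released host-wide IPAM lock."
	return fn() // runs between "Acquired" and "Released"
}

func main() {
	_ = withHostLock("/tmp/ipam.lock", func() error { // path is an assumption
		fmt.Println("assign or release addresses here")
		return nil
	})
}
```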
Sep 5 23:53:39.867502 containerd[1435]: 2025-09-05 23:53:39.841 [INFO][5111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" HandleID="k8s-pod-network.93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.844 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxc2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ad477-f492-4278-9581-c6fba1569e81", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nxc2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3d8f108aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.845 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.845 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3d8f108aaf ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.849 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.852 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxc2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ad477-f492-4278-9581-c6fba1569e81", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2", Pod:"csi-node-driver-nxc2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3d8f108aaf", MAC:"9e:95:76:a4:bf:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.868248 containerd[1435]: 2025-09-05 23:53:39.864 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2" Namespace="calico-system" Pod="csi-node-driver-nxc2c" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:39.887488 containerd[1435]: time="2025-09-05T23:53:39.887366001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:39.887879 containerd[1435]: time="2025-09-05T23:53:39.887834121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:39.887911 containerd[1435]: time="2025-09-05T23:53:39.887869281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.888270 containerd[1435]: time="2025-09-05T23:53:39.888202641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.912823 systemd[1]: Started cri-containerd-93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2.scope - libcontainer container 93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2. 
Sep 5 23:53:39.929800 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 23:53:39.942244 containerd[1435]: time="2025-09-05T23:53:39.942204591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxc2c,Uid:d08ad477-f492-4278-9581-c6fba1569e81,Namespace:calico-system,Attempt:1,} returns sandbox id \"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2\"" Sep 5 23:53:39.960596 systemd-networkd[1380]: cali344441d10e4: Gained IPv6LL Sep 5 23:53:40.344164 containerd[1435]: time="2025-09-05T23:53:40.344113986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:40.345422 containerd[1435]: time="2025-09-05T23:53:40.345236825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:53:40.346669 containerd[1435]: time="2025-09-05T23:53:40.346443184Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:40.352705 containerd[1435]: time="2025-09-05T23:53:40.352627978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:40.353562 containerd[1435]: time="2025-09-05T23:53:40.353525018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.342410783s" Sep 5 23:53:40.353562 containerd[1435]: time="2025-09-05T23:53:40.353559057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:53:40.354943 containerd[1435]: time="2025-09-05T23:53:40.354834456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:53:40.357414 containerd[1435]: time="2025-09-05T23:53:40.357379694Z" level=info msg="CreateContainer within sandbox \"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:53:40.381651 containerd[1435]: time="2025-09-05T23:53:40.381599312Z" level=info msg="CreateContainer within sandbox \"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e\"" Sep 5 23:53:40.382846 containerd[1435]: time="2025-09-05T23:53:40.382795471Z" level=info msg="StartContainer for \"590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e\"" Sep 5 23:53:40.410719 systemd[1]: Started cri-containerd-590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e.scope - libcontainer container 590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e. 
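The CreateContainer/StartContainer pairs above, each followed by systemd starting a cri-containerd-<id>.scope, are the CRI plugin driving containerd and the runc shim. Outside the CRI path, the same load-task-start sequence looks roughly like this with containerd's public Go client; the socket path and namespace are conventional defaults, and error handling is trimmed for illustration.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to "StartContainer … returns successfully" in the log.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```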
Sep 5 23:53:40.449866 containerd[1435]: time="2025-09-05T23:53:40.449825290Z" level=info msg="StartContainer for \"590ad9b1b29fa782abb84e6d615b3e6ba1866cb4f247f33d40c0b6abea76b37e\" returns successfully" Sep 5 23:53:40.535608 systemd-networkd[1380]: calibb33f64fa52: Gained IPv6LL Sep 5 23:53:40.585105 containerd[1435]: time="2025-09-05T23:53:40.585032928Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:40.586411 containerd[1435]: time="2025-09-05T23:53:40.586335327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:53:40.589959 containerd[1435]: time="2025-09-05T23:53:40.589872764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 235.004788ms" Sep 5 23:53:40.589959 containerd[1435]: time="2025-09-05T23:53:40.589906444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:53:40.591801 containerd[1435]: time="2025-09-05T23:53:40.591147203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:53:40.593398 containerd[1435]: time="2025-09-05T23:53:40.593306801Z" level=info msg="CreateContainer within sandbox \"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:53:40.608663 containerd[1435]: time="2025-09-05T23:53:40.608554307Z" level=info msg="CreateContainer within sandbox \"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1b4cd99341866cf1126ed84d419e11c71d51a619bfd0e06f0cbe90e052c96b0\"" Sep 5 23:53:40.611229 containerd[1435]: time="2025-09-05T23:53:40.610734185Z" level=info msg="StartContainer for \"d1b4cd99341866cf1126ed84d419e11c71d51a619bfd0e06f0cbe90e052c96b0\"" Sep 5 23:53:40.631954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188613274.mount: Deactivated successfully. Sep 5 23:53:40.655801 systemd[1]: Started cri-containerd-d1b4cd99341866cf1126ed84d419e11c71d51a619bfd0e06f0cbe90e052c96b0.scope - libcontainer container d1b4cd99341866cf1126ed84d419e11c71d51a619bfd0e06f0cbe90e052c96b0. 
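The "Gained IPv6LL" lines mean the cali* veths brought up IPv6 link-local addresses. For classic EUI-64 autoconfiguration, the fe80::/64 address is derived from the interface MAC (56:9e:b0:22:09:9c for calibb33f64fa52, per the endpoint dump above) by flipping the universal/local bit and inserting ff:fe, as sketched below. Interfaces may instead use stable-privacy addresses, so treat this as the textbook derivation rather than what necessarily ran here.

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC builds the EUI-64 fe80:: address for a 48-bit MAC.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, 16)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("56:9e:b0:22:09:9c") // calibb33f64fa52's MAC above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::549e:b0ff:fe22:99c
}
```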
Sep 5 23:53:40.712802 containerd[1435]: time="2025-09-05T23:53:40.712738692Z" level=info msg="StartContainer for \"d1b4cd99341866cf1126ed84d419e11c71d51a619bfd0e06f0cbe90e052c96b0\" returns successfully" Sep 5 23:53:40.778600 kubelet[2474]: E0905 23:53:40.778556 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:40.782342 kubelet[2474]: I0905 23:53:40.782271 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-857bdbd65-rfjbb" podStartSLOduration=25.282677151 podStartE2EDuration="28.78225683s" podCreationTimestamp="2025-09-05 23:53:12 +0000 UTC" firstStartedPulling="2025-09-05 23:53:37.091306324 +0000 UTC m=+39.651644561" lastFinishedPulling="2025-09-05 23:53:40.590886003 +0000 UTC m=+43.151224240" observedRunningTime="2025-09-05 23:53:40.78207027 +0000 UTC m=+43.342408507" watchObservedRunningTime="2025-09-05 23:53:40.78225683 +0000 UTC m=+43.342595067" Sep 5 23:53:40.795563 kubelet[2474]: I0905 23:53:40.795415 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-lrvz2" podStartSLOduration=20.401906252 podStartE2EDuration="23.795381698s" podCreationTimestamp="2025-09-05 23:53:17 +0000 UTC" firstStartedPulling="2025-09-05 23:53:36.961133051 +0000 UTC m=+39.521471288" lastFinishedPulling="2025-09-05 23:53:40.354608497 +0000 UTC m=+42.914946734" observedRunningTime="2025-09-05 23:53:40.795101418 +0000 UTC m=+43.355439655" watchObservedRunningTime="2025-09-05 23:53:40.795381698 +0000 UTC m=+43.355719935" Sep 5 23:53:41.165804 kubelet[2474]: I0905 23:53:41.165609 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:41.166097 kubelet[2474]: E0905 23:53:41.165986 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:41.303607 systemd-networkd[1380]: calib3d8f108aaf: Gained IPv6LL Sep 5 23:53:41.790629 kubelet[2474]: I0905 23:53:41.780528 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:41.790629 kubelet[2474]: E0905 23:53:41.782603 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:41.790629 kubelet[2474]: E0905 23:53:41.783536 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 23:53:42.344498 kernel: bpftool[5350]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:53:42.543810 systemd-networkd[1380]: vxlan.calico: Link UP Sep 5 23:53:42.543817 systemd-networkd[1380]: vxlan.calico: Gained carrier Sep 5 23:53:43.159713 kubelet[2474]: I0905 23:53:43.159680 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:43.749750 containerd[1435]: time="2025-09-05T23:53:43.749706276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.750555 containerd[1435]: time="2025-09-05T23:53:43.750521035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes 
read=48134957" Sep 5 23:53:43.751547 containerd[1435]: time="2025-09-05T23:53:43.751519074Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.753821 containerd[1435]: time="2025-09-05T23:53:43.753792552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.754408 containerd[1435]: time="2025-09-05T23:53:43.754378472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.16320083s" Sep 5 23:53:43.754475 containerd[1435]: time="2025-09-05T23:53:43.754411112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:53:43.758393 containerd[1435]: time="2025-09-05T23:53:43.758364148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:53:43.766082 containerd[1435]: time="2025-09-05T23:53:43.766038422Z" level=info msg="CreateContainer within sandbox \"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:53:43.793979 containerd[1435]: time="2025-09-05T23:53:43.793925798Z" level=info msg="CreateContainer within sandbox \"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"70268b58564fe2c306451708f5c4885dd2c03db0e4a2d0c3d102999ae8c45dff\"" Sep 5 23:53:43.794376 containerd[1435]: time="2025-09-05T23:53:43.794294517Z" level=info msg="StartContainer for \"70268b58564fe2c306451708f5c4885dd2c03db0e4a2d0c3d102999ae8c45dff\"" Sep 5 23:53:43.843634 systemd[1]: Started cri-containerd-70268b58564fe2c306451708f5c4885dd2c03db0e4a2d0c3d102999ae8c45dff.scope - libcontainer container 70268b58564fe2c306451708f5c4885dd2c03db0e4a2d0c3d102999ae8c45dff. 
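The kubelet "Observed pod startup duration" lines above reduce to simple timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image pull window (lastFinishedPulling minus firstStartedPulling). The sketch below reproduces the apiserver pod's numbers from the logged timestamps; field handling is simplified and parse errors are ignored for brevity.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-05 23:53:12 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-05 23:53:40.78207027 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2025-09-05 23:53:37.091306324 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2025-09-05 23:53:40.590886003 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // pull time doesn't count against the SLO
	fmt.Println(e2e, slo)               // ≈28.78s and ≈25.28s, matching the log
}
```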
Sep 5 23:53:43.864633 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Sep 5 23:53:43.887967 containerd[1435]: time="2025-09-05T23:53:43.887925077Z" level=info msg="StartContainer for \"70268b58564fe2c306451708f5c4885dd2c03db0e4a2d0c3d102999ae8c45dff\" returns successfully" Sep 5 23:53:44.852493 kubelet[2474]: I0905 23:53:44.852290 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6db9444479-4dqph" podStartSLOduration=23.085998788 podStartE2EDuration="27.852268147s" podCreationTimestamp="2025-09-05 23:53:17 +0000 UTC" firstStartedPulling="2025-09-05 23:53:38.99107683 +0000 UTC m=+41.551415067" lastFinishedPulling="2025-09-05 23:53:43.757346189 +0000 UTC m=+46.317684426" observedRunningTime="2025-09-05 23:53:44.817565776 +0000 UTC m=+47.377904013" watchObservedRunningTime="2025-09-05 23:53:44.852268147 +0000 UTC m=+47.412606424" Sep 5 23:53:45.008498 containerd[1435]: time="2025-09-05T23:53:45.008415296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:45.009766 containerd[1435]: time="2025-09-05T23:53:45.009731855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:53:45.010643 containerd[1435]: time="2025-09-05T23:53:45.010605774Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:45.013129 containerd[1435]: time="2025-09-05T23:53:45.013084852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:45.013712 containerd[1435]: time="2025-09-05T23:53:45.013688851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.255289703s" Sep 5 23:53:45.013805 containerd[1435]: time="2025-09-05T23:53:45.013718451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:53:45.016023 containerd[1435]: time="2025-09-05T23:53:45.015997089Z" level=info msg="CreateContainer within sandbox \"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:53:45.054527 containerd[1435]: time="2025-09-05T23:53:45.054452058Z" level=info msg="CreateContainer within sandbox \"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a399563b21acb8297c16d43cd1cb2158c2a87c76eeaf3a3bf710d130ff54ef85\"" Sep 5 23:53:45.055114 containerd[1435]: time="2025-09-05T23:53:45.055068497Z" level=info msg="StartContainer for \"a399563b21acb8297c16d43cd1cb2158c2a87c76eeaf3a3bf710d130ff54ef85\"" Sep 5 23:53:45.089673 systemd[1]: Started cri-containerd-a399563b21acb8297c16d43cd1cb2158c2a87c76eeaf3a3bf710d130ff54ef85.scope - libcontainer container a399563b21acb8297c16d43cd1cb2158c2a87c76eeaf3a3bf710d130ff54ef85. 
Sep 5 23:53:45.146970 containerd[1435]: time="2025-09-05T23:53:45.146845261Z" level=info msg="StartContainer for \"a399563b21acb8297c16d43cd1cb2158c2a87c76eeaf3a3bf710d130ff54ef85\" returns successfully" Sep 5 23:53:45.148462 containerd[1435]: time="2025-09-05T23:53:45.148435740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:53:46.048672 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Sep 5 23:53:46.117538 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:53:46.119019 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:46.130566 systemd-logind[1422]: New session 8 of user core. Sep 5 23:53:46.141665 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:53:46.345705 containerd[1435]: time="2025-09-05T23:53:46.344791517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:53:46.345705 containerd[1435]: time="2025-09-05T23:53:46.344848957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:46.346048 containerd[1435]: time="2025-09-05T23:53:46.345980116Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:46.348663 containerd[1435]: time="2025-09-05T23:53:46.348623594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:46.349406 containerd[1435]: time="2025-09-05T23:53:46.349380713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.200695653s" Sep 5 23:53:46.349458 containerd[1435]: time="2025-09-05T23:53:46.349412513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:53:46.353082 containerd[1435]: time="2025-09-05T23:53:46.353019910Z" level=info msg="CreateContainer within sandbox \"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:53:46.369125 containerd[1435]: time="2025-09-05T23:53:46.369087657Z" level=info msg="CreateContainer within sandbox \"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c029060262c48779a013a36cbddc8add6ea22167152bd3cb3ba5fa973c374a5d\"" Sep 5 23:53:46.369861 containerd[1435]: time="2025-09-05T23:53:46.369833097Z" level=info msg="StartContainer for \"c029060262c48779a013a36cbddc8add6ea22167152bd3cb3ba5fa973c374a5d\"" Sep 5 23:53:46.371347 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4088418217.mount: Deactivated successfully. Sep 5 23:53:46.404957 systemd[1]: Started cri-containerd-c029060262c48779a013a36cbddc8add6ea22167152bd3cb3ba5fa973c374a5d.scope - libcontainer container c029060262c48779a013a36cbddc8add6ea22167152bd3cb3ba5fa973c374a5d. Sep 5 23:53:46.438582 containerd[1435]: time="2025-09-05T23:53:46.438520041Z" level=info msg="StartContainer for \"c029060262c48779a013a36cbddc8add6ea22167152bd3cb3ba5fa973c374a5d\" returns successfully" Sep 5 23:53:46.517403 sshd[5602]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:46.522247 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:53926.service: Deactivated successfully. Sep 5 23:53:46.523905 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:53:46.525210 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:53:46.526345 systemd-logind[1422]: Removed session 8. Sep 5 23:53:46.613143 kubelet[2474]: I0905 23:53:46.612513 2474 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:53:46.615960 kubelet[2474]: I0905 23:53:46.615929 2474 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:53:46.812971 kubelet[2474]: I0905 23:53:46.812801 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nxc2c" podStartSLOduration=23.406204973 podStartE2EDuration="29.812784737s" podCreationTimestamp="2025-09-05 23:53:17 +0000 UTC" firstStartedPulling="2025-09-05 23:53:39.943727349 +0000 UTC m=+42.504065546" lastFinishedPulling="2025-09-05 23:53:46.350307073 +0000 UTC m=+48.910645310" observedRunningTime="2025-09-05 23:53:46.80956014 +0000 UTC m=+49.369898377" watchObservedRunningTime="2025-09-05 23:53:46.812784737 +0000 UTC m=+49.373122934" Sep 5 23:53:51.531525 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:33616.service - OpenSSH per-connection server daemon (10.0.0.1:33616). Sep 5 23:53:51.568039 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 33616 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:53:51.569426 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:51.573724 systemd-logind[1422]: New session 9 of user core. Sep 5 23:53:51.585679 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:53:51.816226 sshd[5673]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:51.821978 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:53:51.822236 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:33616.service: Deactivated successfully. Sep 5 23:53:51.824440 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:53:51.827653 systemd-logind[1422]: Removed session 9. Sep 5 23:53:56.827639 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:33630.service - OpenSSH per-connection server daemon (10.0.0.1:33630). Sep 5 23:53:56.902442 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 33630 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:53:56.904209 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:56.910847 systemd-logind[1422]: New session 10 of user core. Sep 5 23:53:56.922782 systemd[1]: Started session-10.scope - Session 10 of User core. 
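The \x2d runs in unit names like run-netns-cni\x2d72de4bec…mount and var-lib-containerd-tmpmounts-containerd\x2dmount…mount above are systemd's unit-name escaping: "/" becomes "-", and a literal "-" (among other bytes) becomes its \xXX hex escape. The toy escaper below covers just the cases visible in these lines; systemd-escape(1) implements the full rules.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath applies the two escaping rules seen in the log: path separators
// become dashes, and literal dashes (or dots) become \xXX escapes.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-')
		case c == '-' || c == '.':
			fmt.Fprintf(&b, `\x%02x`, c)
		default:
			b.WriteByte(c)
		}
	}
	return b.String()
}

func main() {
	// Prints run-netns-cni\x2d72de4bec\x2d1e01\x2d38d0\x2db86b\x2dc86a209dc382.mount
	fmt.Println(escapePath("/run/netns/cni-72de4bec-1e01-38d0-b86b-c86a209dc382") + ".mount")
}
```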
Sep 5 23:53:57.085642 sshd[5718]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:57.094675 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:33630.service: Deactivated successfully. Sep 5 23:53:57.097798 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:53:57.099292 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:53:57.108811 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:33636.service - OpenSSH per-connection server daemon (10.0.0.1:33636). Sep 5 23:53:57.109927 systemd-logind[1422]: Removed session 10. Sep 5 23:53:57.151900 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:53:57.154658 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:57.162240 systemd-logind[1422]: New session 11 of user core. Sep 5 23:53:57.172689 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:53:57.385576 sshd[5733]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:57.394881 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:33636.service: Deactivated successfully. Sep 5 23:53:57.396809 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:53:57.399285 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:53:57.410121 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:33646.service - OpenSSH per-connection server daemon (10.0.0.1:33646). Sep 5 23:53:57.413131 systemd-logind[1422]: Removed session 11. Sep 5 23:53:57.454794 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 33646 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 5 23:53:57.456446 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:57.462819 systemd-logind[1422]: New session 12 of user core. Sep 5 23:53:57.474677 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:53:57.541160 containerd[1435]: time="2025-09-05T23:53:57.540896158Z" level=info msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\"" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.585 [WARNING][5768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5", Pod:"coredns-7c65d6cfc9-hc2b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali344441d10e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.586 [INFO][5768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.586 [INFO][5768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" iface="eth0" netns="" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.586 [INFO][5768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.586 [INFO][5768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.611 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.611 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.611 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.625 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.625 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.626 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:57.632725 containerd[1435]: 2025-09-05 23:53:57.630 [INFO][5768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.634576 containerd[1435]: time="2025-09-05T23:53:57.632766773Z" level=info msg="TearDown network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" successfully" Sep 5 23:53:57.634576 containerd[1435]: time="2025-09-05T23:53:57.632793853Z" level=info msg="StopPodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" returns successfully" Sep 5 23:53:57.634849 containerd[1435]: time="2025-09-05T23:53:57.634814091Z" level=info msg="RemovePodSandbox for \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\"" Sep 5 23:53:57.636502 containerd[1435]: time="2025-09-05T23:53:57.636407130Z" level=info msg="Forcibly stopping sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\"" Sep 5 23:53:57.699693 sshd[5746]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:57.704797 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:33646.service: Deactivated successfully. Sep 5 23:53:57.706728 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:53:57.709604 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:53:57.711325 systemd-logind[1422]: Removed session 12. Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.676 [WARNING][5795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b824ed2d-3ec1-40d0-9ed1-af7f8cdb6fb3", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa2a87f8460b934d1ed6f46b6ff5afdc0804a5ce88e33539eb4f35b1ac120df5", Pod:"coredns-7c65d6cfc9-hc2b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali344441d10e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.677 [INFO][5795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.677 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" iface="eth0" netns="" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.677 [INFO][5795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.677 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.699 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.699 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.699 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.711 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.711 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" HandleID="k8s-pod-network.e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Workload="localhost-k8s-coredns--7c65d6cfc9--hc2b4-eth0" Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.713 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:57.718266 containerd[1435]: 2025-09-05 23:53:57.716 [INFO][5795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3" Sep 5 23:53:57.718702 containerd[1435]: time="2025-09-05T23:53:57.718314352Z" level=info msg="TearDown network for sandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" successfully" Sep 5 23:53:57.790800 containerd[1435]: time="2025-09-05T23:53:57.790548820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:57.790800 containerd[1435]: time="2025-09-05T23:53:57.790649860Z" level=info msg="RemovePodSandbox \"e6c748219ce61ddeab71922913149d13a231578734b040485269f6e3a744d9b3\" returns successfully" Sep 5 23:53:57.791542 containerd[1435]: time="2025-09-05T23:53:57.791178980Z" level=info msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\"" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.834 [WARNING][5823] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0", GenerateName:"calico-kube-controllers-6db9444479-", Namespace:"calico-system", SelfLink:"", UID:"5e413f16-c817-4175-8b6a-241c2e2a0ced", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db9444479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8", Pod:"calico-kube-controllers-6db9444479-4dqph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb33f64fa52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.834 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.834 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" iface="eth0" netns="" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.834 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.834 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.852 [INFO][5832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.852 [INFO][5832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.852 [INFO][5832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.875 [WARNING][5832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.875 [INFO][5832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.877 [INFO][5832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:57.883231 containerd[1435]: 2025-09-05 23:53:57.881 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.883929 containerd[1435]: time="2025-09-05T23:53:57.883801674Z" level=info msg="TearDown network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" successfully" Sep 5 23:53:57.883929 containerd[1435]: time="2025-09-05T23:53:57.883842274Z" level=info msg="StopPodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" returns successfully" Sep 5 23:53:57.884380 containerd[1435]: time="2025-09-05T23:53:57.884355193Z" level=info msg="RemovePodSandbox for \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\"" Sep 5 23:53:57.884435 containerd[1435]: time="2025-09-05T23:53:57.884389353Z" level=info msg="Forcibly stopping sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\"" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.929 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0", GenerateName:"calico-kube-controllers-6db9444479-", Namespace:"calico-system", SelfLink:"", UID:"5e413f16-c817-4175-8b6a-241c2e2a0ced", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db9444479", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"607a51064b0e92e12c5443de2d362053fc880db6708620ffd5469fd45bdaaec8", Pod:"calico-kube-controllers-6db9444479-4dqph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb33f64fa52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.929 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.929 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" iface="eth0" netns="" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.929 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.929 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.948 [INFO][5860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.948 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.948 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.957 [WARNING][5860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.957 [INFO][5860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" HandleID="k8s-pod-network.09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Workload="localhost-k8s-calico--kube--controllers--6db9444479--4dqph-eth0" Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.959 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:57.962505 containerd[1435]: 2025-09-05 23:53:57.960 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b" Sep 5 23:53:57.962505 containerd[1435]: time="2025-09-05T23:53:57.962377618Z" level=info msg="TearDown network for sandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" successfully" Sep 5 23:53:57.966879 containerd[1435]: time="2025-09-05T23:53:57.966840294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:57.966939 containerd[1435]: time="2025-09-05T23:53:57.966906214Z" level=info msg="RemovePodSandbox \"09754f707cd44ceb5348d1f918902b5c4ac6e26c7008fa67f1a7cd6afa07fc2b\" returns successfully" Sep 5 23:53:57.968385 containerd[1435]: time="2025-09-05T23:53:57.968356893Z" level=info msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.009 [WARNING][5877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24814833-7fb2-4299-991a-3537be9695bf", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4", Pod:"coredns-7c65d6cfc9-6k77q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie991b4a1f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.009 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.009 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" iface="eth0" netns="" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.009 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.009 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.028 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.029 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.029 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.038 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.038 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.039 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.043215 containerd[1435]: 2025-09-05 23:53:58.041 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.043640 containerd[1435]: time="2025-09-05T23:53:58.043243840Z" level=info msg="TearDown network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" successfully" Sep 5 23:53:58.043640 containerd[1435]: time="2025-09-05T23:53:58.043268320Z" level=info msg="StopPodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" returns successfully" Sep 5 23:53:58.044194 containerd[1435]: time="2025-09-05T23:53:58.043925000Z" level=info msg="RemovePodSandbox for \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" Sep 5 23:53:58.044194 containerd[1435]: time="2025-09-05T23:53:58.043964080Z" level=info msg="Forcibly stopping sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\"" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.080 [WARNING][5903] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24814833-7fb2-4299-991a-3537be9695bf", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eb39047cc3d4071e9e079502031615292e3aee4e53b9d4eb69a8e2a960e3fa4", Pod:"coredns-7c65d6cfc9-6k77q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie991b4a1f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.080 [INFO][5903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.081 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" iface="eth0" netns="" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.081 [INFO][5903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.081 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.112 [INFO][5912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.112 [INFO][5912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.112 [INFO][5912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.122 [WARNING][5912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.122 [INFO][5912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" HandleID="k8s-pod-network.099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Workload="localhost-k8s-coredns--7c65d6cfc9--6k77q-eth0" Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.123 [INFO][5912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.126614 containerd[1435]: 2025-09-05 23:53:58.125 [INFO][5903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e" Sep 5 23:53:58.127113 containerd[1435]: time="2025-09-05T23:53:58.126654261Z" level=info msg="TearDown network for sandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" successfully" Sep 5 23:53:58.130273 containerd[1435]: time="2025-09-05T23:53:58.130240979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:58.130389 containerd[1435]: time="2025-09-05T23:53:58.130304779Z" level=info msg="RemovePodSandbox \"099dd7286c2e45d66d2bb2e937b657120694488b8178015193bdd6a5011fba0e\" returns successfully" Sep 5 23:53:58.131138 containerd[1435]: time="2025-09-05T23:53:58.130817738Z" level=info msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.163 [WARNING][5932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7", Pod:"calico-apiserver-857bdbd65-rfjbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0aa493b499", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.163 [INFO][5932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.163 [INFO][5932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" iface="eth0" netns="" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.163 [INFO][5932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.163 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.187 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.187 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.187 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.195 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.195 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.197 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.200932 containerd[1435]: 2025-09-05 23:53:58.199 [INFO][5932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.201792 containerd[1435]: time="2025-09-05T23:53:58.201453688Z" level=info msg="TearDown network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" successfully" Sep 5 23:53:58.201792 containerd[1435]: time="2025-09-05T23:53:58.201523968Z" level=info msg="StopPodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" returns successfully" Sep 5 23:53:58.202808 containerd[1435]: time="2025-09-05T23:53:58.202428288Z" level=info msg="RemovePodSandbox for \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" Sep 5 23:53:58.202895 containerd[1435]: time="2025-09-05T23:53:58.202833847Z" level=info msg="Forcibly stopping sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\"" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.239 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcbb6ba6-1660-45d6-89cb-7cb16d9b9272", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5a32f38a42130048a5b3cbd5d5796fff55b3f311a8e8d667de630c288a9e1a7", Pod:"calico-apiserver-857bdbd65-rfjbb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0aa493b499", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.239 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.239 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" iface="eth0" netns="" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.239 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.239 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.258 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.258 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.258 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.271 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.271 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" HandleID="k8s-pod-network.2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Workload="localhost-k8s-calico--apiserver--857bdbd65--rfjbb-eth0" Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.273 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.277995 containerd[1435]: 2025-09-05 23:53:58.275 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692" Sep 5 23:53:58.277995 containerd[1435]: time="2025-09-05T23:53:58.277336115Z" level=info msg="TearDown network for sandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" successfully" Sep 5 23:53:58.284056 containerd[1435]: time="2025-09-05T23:53:58.284015350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:58.284153 containerd[1435]: time="2025-09-05T23:53:58.284089670Z" level=info msg="RemovePodSandbox \"2beafe61b2c1aebc6f244edea615a0e0100bd127aa7f4221dd4b4d84ea9e9692\" returns successfully" Sep 5 23:53:58.284686 containerd[1435]: time="2025-09-05T23:53:58.284658470Z" level=info msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.316 [WARNING][5985] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" WorkloadEndpoint="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.316 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.316 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" iface="eth0" netns="" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.316 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.317 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.335 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.335 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.335 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.344 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.344 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.346 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.349258 containerd[1435]: 2025-09-05 23:53:58.347 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.349258 containerd[1435]: time="2025-09-05T23:53:58.349143064Z" level=info msg="TearDown network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" successfully" Sep 5 23:53:58.349258 containerd[1435]: time="2025-09-05T23:53:58.349167424Z" level=info msg="StopPodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" returns successfully" Sep 5 23:53:58.349671 containerd[1435]: time="2025-09-05T23:53:58.349616384Z" level=info msg="RemovePodSandbox for \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" Sep 5 23:53:58.349671 containerd[1435]: time="2025-09-05T23:53:58.349646384Z" level=info msg="Forcibly stopping sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\"" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.389 [WARNING][6011] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" WorkloadEndpoint="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.390 [INFO][6011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.390 [INFO][6011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" iface="eth0" netns="" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.390 [INFO][6011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.390 [INFO][6011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.408 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.409 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.409 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.417 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.418 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" HandleID="k8s-pod-network.f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Workload="localhost-k8s-whisker--556c969bf5--mkv7h-eth0" Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.419 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.422868 containerd[1435]: 2025-09-05 23:53:58.421 [INFO][6011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b" Sep 5 23:53:58.423313 containerd[1435]: time="2025-09-05T23:53:58.422896212Z" level=info msg="TearDown network for sandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" successfully" Sep 5 23:53:58.427055 containerd[1435]: time="2025-09-05T23:53:58.427020649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:58.427117 containerd[1435]: time="2025-09-05T23:53:58.427088329Z" level=info msg="RemovePodSandbox \"f18c1fb825763c1b0e565cc243a22741ad904d3053eec63a29c2194746b8896b\" returns successfully" Sep 5 23:53:58.427594 containerd[1435]: time="2025-09-05T23:53:58.427569609Z" level=info msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.457 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxc2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ad477-f492-4278-9581-c6fba1569e81", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2", Pod:"csi-node-driver-nxc2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3d8f108aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.458 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.458 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" iface="eth0" netns="" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.458 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.458 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.474 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.474 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.474 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.482 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.482 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.484 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.487229 containerd[1435]: 2025-09-05 23:53:58.485 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.487999 containerd[1435]: time="2025-09-05T23:53:58.487264606Z" level=info msg="TearDown network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" successfully" Sep 5 23:53:58.487999 containerd[1435]: time="2025-09-05T23:53:58.487288926Z" level=info msg="StopPodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" returns successfully" Sep 5 23:53:58.487999 containerd[1435]: time="2025-09-05T23:53:58.487693486Z" level=info msg="RemovePodSandbox for \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" Sep 5 23:53:58.487999 containerd[1435]: time="2025-09-05T23:53:58.487721046Z" level=info msg="Forcibly stopping sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\"" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.526 [WARNING][6063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxc2c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d08ad477-f492-4278-9581-c6fba1569e81", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93a7069a8987f9f09047292392a2eb4faaa7dc6ee1ba6296b83d1bc626be90e2", Pod:"csi-node-driver-nxc2c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib3d8f108aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.526 [INFO][6063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.526 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" iface="eth0" netns="" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.526 [INFO][6063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.526 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.544 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.544 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.544 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.555 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.555 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" HandleID="k8s-pod-network.99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Workload="localhost-k8s-csi--node--driver--nxc2c-eth0" Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.556 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:58.560706 containerd[1435]: 2025-09-05 23:53:58.558 [INFO][6063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63" Sep 5 23:53:58.562088 containerd[1435]: time="2025-09-05T23:53:58.561370274Z" level=info msg="TearDown network for sandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" successfully" Sep 5 23:53:58.567275 containerd[1435]: time="2025-09-05T23:53:58.567235950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:58.567690 containerd[1435]: time="2025-09-05T23:53:58.567583030Z" level=info msg="RemovePodSandbox \"99c24796811f00915c478f3f4b827551bf1a85984e6f72aac6bb6179a0804e63\" returns successfully" Sep 5 23:53:58.568348 containerd[1435]: time="2025-09-05T23:53:58.568055989Z" level=info msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\"" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.600 [WARNING][6090] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lrvz2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"35cf1f2e-da60-43ff-8f52-05621e07097d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b", Pod:"goldmane-7988f88666-lrvz2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77df037e22a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.600 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.600 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" iface="eth0" netns="" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.600 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.600 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.619 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0" Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.620 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.620 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.628 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0"
Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.628 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0"
Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.631 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:58.634777 containerd[1435]: 2025-09-05 23:53:58.633 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"
Sep 5 23:53:58.635431 containerd[1435]: time="2025-09-05T23:53:58.634845982Z" level=info msg="TearDown network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" successfully"
Sep 5 23:53:58.635431 containerd[1435]: time="2025-09-05T23:53:58.634872462Z" level=info msg="StopPodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" returns successfully"
Sep 5 23:53:58.635431 containerd[1435]: time="2025-09-05T23:53:58.635304182Z" level=info msg="RemovePodSandbox for \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\""
Sep 5 23:53:58.635431 containerd[1435]: time="2025-09-05T23:53:58.635332462Z" level=info msg="Forcibly stopping sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\""
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.671 [WARNING][6116] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--lrvz2-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"35cf1f2e-da60-43ff-8f52-05621e07097d", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e1b3eb61a47634820737df0895c3f1dfceb8268d405c9dd23e365a39c2a082b", Pod:"goldmane-7988f88666-lrvz2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77df037e22a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.671 [INFO][6116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.671 [INFO][6116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" iface="eth0" netns=""
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.671 [INFO][6116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.671 [INFO][6116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.689 [INFO][6125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.690 [INFO][6125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.690 [INFO][6125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.699 [WARNING][6125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.699 [INFO][6125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" HandleID="k8s-pod-network.2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6" Workload="localhost-k8s-goldmane--7988f88666--lrvz2-eth0"
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.700 [INFO][6125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:58.705606 containerd[1435]: 2025-09-05 23:53:58.702 [INFO][6116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6"
Sep 5 23:53:58.705606 containerd[1435]: time="2025-09-05T23:53:58.704484133Z" level=info msg="TearDown network for sandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" successfully"
Sep 5 23:53:58.711786 containerd[1435]: time="2025-09-05T23:53:58.711712848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:53:58.711911 containerd[1435]: time="2025-09-05T23:53:58.711791528Z" level=info msg="RemovePodSandbox \"2cf7c81427baaa73a5051a288d4f7927cb3af0e5e2f06cca6d1c63bd7b5f38a6\" returns successfully"
Sep 5 23:53:58.712284 containerd[1435]: time="2025-09-05T23:53:58.712260047Z" level=info msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\""
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.750 [WARNING][6143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"d49fff1f-0b54-40b8-bccc-fc99d8d97bff", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e", Pod:"calico-apiserver-857bdbd65-8z7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144ecf1450b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.750 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.750 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" iface="eth0" netns=""
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.750 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.750 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.768 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.768 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.768 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.777 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.777 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.778 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:58.782038 containerd[1435]: 2025-09-05 23:53:58.780 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.782554 containerd[1435]: time="2025-09-05T23:53:58.782077238Z" level=info msg="TearDown network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" successfully"
Sep 5 23:53:58.782554 containerd[1435]: time="2025-09-05T23:53:58.782103798Z" level=info msg="StopPodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" returns successfully"
Sep 5 23:53:58.783093 containerd[1435]: time="2025-09-05T23:53:58.782781478Z" level=info msg="RemovePodSandbox for \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\""
Sep 5 23:53:58.783093 containerd[1435]: time="2025-09-05T23:53:58.782826038Z" level=info msg="Forcibly stopping sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\""
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.816 [WARNING][6171] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0", GenerateName:"calico-apiserver-857bdbd65-", Namespace:"calico-apiserver", SelfLink:"", UID:"d49fff1f-0b54-40b8-bccc-fc99d8d97bff", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"857bdbd65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48ad9150ae8e01bb656ff53f20111c67a86255e7115feeadc4292cccdcdc866e", Pod:"calico-apiserver-857bdbd65-8z7q2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144ecf1450b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.816 [INFO][6171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.816 [INFO][6171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" iface="eth0" netns=""
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.816 [INFO][6171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.816 [INFO][6171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.835 [INFO][6179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.835 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.835 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.847 [WARNING][6179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.847 [INFO][6179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" HandleID="k8s-pod-network.cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c" Workload="localhost-k8s-calico--apiserver--857bdbd65--8z7q2-eth0"
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.849 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:53:58.856845 containerd[1435]: 2025-09-05 23:53:58.853 [INFO][6171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c"
Sep 5 23:53:58.858097 containerd[1435]: time="2025-09-05T23:53:58.856963025Z" level=info msg="TearDown network for sandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" successfully"
Sep 5 23:53:58.861673 containerd[1435]: time="2025-09-05T23:53:58.861532902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 23:53:58.861673 containerd[1435]: time="2025-09-05T23:53:58.861597702Z" level=info msg="RemovePodSandbox \"cab1f265e8a6d7bf39f187427852470d78d1f9b7834ee5480620fdd41838f07c\" returns successfully"
Sep 5 23:54:00.252678 kubelet[2474]: I0905 23:54:00.252608 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 23:54:02.724041 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:44552.service - OpenSSH per-connection server daemon (10.0.0.1:44552).
Sep 5 23:54:02.782510 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 44552 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:02.783565 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:02.790376 systemd-logind[1422]: New session 13 of user core.
Sep 5 23:54:02.800728 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 5 23:54:02.947974 sshd[6189]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:02.950550 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:44552.service: Deactivated successfully.
Sep 5 23:54:02.952242 systemd[1]: session-13.scope: Deactivated successfully.
Sep 5 23:54:02.954497 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit.
Sep 5 23:54:02.956887 systemd-logind[1422]: Removed session 13.
Sep 5 23:54:07.962006 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:44566.service - OpenSSH per-connection server daemon (10.0.0.1:44566).
Sep 5 23:54:08.025522 sshd[6258]: Accepted publickey for core from 10.0.0.1 port 44566 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:08.027079 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:08.034222 systemd-logind[1422]: New session 14 of user core.
Sep 5 23:54:08.044673 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 5 23:54:08.258216 sshd[6258]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:08.271898 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:44566.service: Deactivated successfully.
Sep 5 23:54:08.274031 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 23:54:08.275591 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit.
Sep 5 23:54:08.284788 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:44574.service - OpenSSH per-connection server daemon (10.0.0.1:44574).
Sep 5 23:54:08.286169 systemd-logind[1422]: Removed session 14.
Sep 5 23:54:08.327394 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 44574 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:08.329715 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:08.334445 systemd-logind[1422]: New session 15 of user core.
Sep 5 23:54:08.340679 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 23:54:08.571138 sshd[6272]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:08.584691 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:44574.service: Deactivated successfully.
Sep 5 23:54:08.586306 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 23:54:08.587827 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit.
Sep 5 23:54:08.596825 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:44580.service - OpenSSH per-connection server daemon (10.0.0.1:44580).
Sep 5 23:54:08.600931 systemd-logind[1422]: Removed session 15.
Sep 5 23:54:08.655694 sshd[6284]: Accepted publickey for core from 10.0.0.1 port 44580 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:08.657440 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:08.661696 systemd-logind[1422]: New session 16 of user core.
Sep 5 23:54:08.675723 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 23:54:10.278255 sshd[6284]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:10.292201 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:49866.service - OpenSSH per-connection server daemon (10.0.0.1:49866).
Sep 5 23:54:10.292697 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:44580.service: Deactivated successfully.
Sep 5 23:54:10.300440 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 23:54:10.302378 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit.
Sep 5 23:54:10.304563 systemd-logind[1422]: Removed session 16.
Sep 5 23:54:10.346027 sshd[6302]: Accepted publickey for core from 10.0.0.1 port 49866 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:10.347440 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:10.353123 systemd-logind[1422]: New session 17 of user core.
Sep 5 23:54:10.360686 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 23:54:10.962948 sshd[6302]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:10.975640 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:49866.service: Deactivated successfully.
Sep 5 23:54:10.978097 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 23:54:10.980057 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit.
Sep 5 23:54:10.987191 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:49870.service - OpenSSH per-connection server daemon (10.0.0.1:49870).
Sep 5 23:54:10.988321 systemd-logind[1422]: Removed session 17.
Sep 5 23:54:11.029754 sshd[6317]: Accepted publickey for core from 10.0.0.1 port 49870 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:11.031149 sshd[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:11.035382 systemd-logind[1422]: New session 18 of user core.
Sep 5 23:54:11.042127 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 23:54:11.176356 sshd[6317]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:11.180254 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Sep 5 23:54:11.180550 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:49870.service: Deactivated successfully.
Sep 5 23:54:11.182266 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 23:54:11.182980 systemd-logind[1422]: Removed session 18.
Sep 5 23:54:16.193463 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:49882.service - OpenSSH per-connection server daemon (10.0.0.1:49882).
Sep 5 23:54:16.244830 sshd[6354]: Accepted publickey for core from 10.0.0.1 port 49882 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:16.246451 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:16.252197 systemd-logind[1422]: New session 19 of user core.
Sep 5 23:54:16.270249 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 23:54:16.414252 sshd[6354]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:16.418211 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit.
Sep 5 23:54:16.418519 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:49882.service: Deactivated successfully.
Sep 5 23:54:16.420539 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 23:54:16.422931 systemd-logind[1422]: Removed session 19.
Sep 5 23:54:21.429265 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624).
Sep 5 23:54:21.496584 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:21.498294 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:21.502723 systemd-logind[1422]: New session 20 of user core.
Sep 5 23:54:21.513702 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 23:54:21.752028 sshd[6374]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:21.757909 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:34624.service: Deactivated successfully.
Sep 5 23:54:21.760215 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 23:54:21.761194 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit.
Sep 5 23:54:21.762381 systemd-logind[1422]: Removed session 20.
Sep 5 23:54:23.519078 kubelet[2474]: E0905 23:54:23.518647 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 23:54:26.764498 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634).
Sep 5 23:54:26.804231 sshd[6419]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 5 23:54:26.805735 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:54:26.810125 systemd-logind[1422]: New session 21 of user core.
Sep 5 23:54:26.824636 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 23:54:27.058576 sshd[6419]: pam_unix(sshd:session): session closed for user core
Sep 5 23:54:27.061566 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:34634.service: Deactivated successfully.
Sep 5 23:54:27.063432 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 23:54:27.065211 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit.
Sep 5 23:54:27.066275 systemd-logind[1422]: Removed session 21.