May 15 00:31:15.888119 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 00:31:15.888141 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 14 22:53:13 -00 2025
May 15 00:31:15.888152 kernel: KASLR enabled
May 15 00:31:15.888157 kernel: efi: EFI v2.7 by EDK II
May 15 00:31:15.888163 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 15 00:31:15.888169 kernel: random: crng init done
May 15 00:31:15.888176 kernel: ACPI: Early table checksum verification disabled
May 15 00:31:15.888182 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 15 00:31:15.888188 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 00:31:15.888196 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888202 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888208 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888214 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888220 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888227 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888235 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888241 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888248 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:31:15.888254 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 00:31:15.888260 kernel: NUMA: Failed to initialise from firmware
May 15 00:31:15.888267 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:31:15.888273 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
May 15 00:31:15.888280 kernel: Zone ranges:
May 15 00:31:15.888286 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:31:15.888292 kernel: DMA32 empty
May 15 00:31:15.888299 kernel: Normal empty
May 15 00:31:15.888306 kernel: Movable zone start for each node
May 15 00:31:15.888312 kernel: Early memory node ranges
May 15 00:31:15.888318 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 00:31:15.888325 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 00:31:15.888331 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 00:31:15.888337 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 00:31:15.888343 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 00:31:15.888350 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 00:31:15.888356 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 00:31:15.888362 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:31:15.888368 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 00:31:15.888376 kernel: psci: probing for conduit method from ACPI.
May 15 00:31:15.888382 kernel: psci: PSCIv1.1 detected in firmware.
May 15 00:31:15.888388 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 00:31:15.888397 kernel: psci: Trusted OS migration not required
May 15 00:31:15.888404 kernel: psci: SMC Calling Convention v1.1
May 15 00:31:15.888411 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 00:31:15.888419 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 00:31:15.888426 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 00:31:15.888432 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 00:31:15.888439 kernel: Detected PIPT I-cache on CPU0
May 15 00:31:15.888446 kernel: CPU features: detected: GIC system register CPU interface
May 15 00:31:15.888453 kernel: CPU features: detected: Hardware dirty bit management
May 15 00:31:15.888460 kernel: CPU features: detected: Spectre-v4
May 15 00:31:15.888466 kernel: CPU features: detected: Spectre-BHB
May 15 00:31:15.888473 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 00:31:15.888480 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 00:31:15.888487 kernel: CPU features: detected: ARM erratum 1418040
May 15 00:31:15.888494 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 00:31:15.888501 kernel: alternatives: applying boot alternatives
May 15 00:31:15.888508 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596
May 15 00:31:15.888516 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:31:15.888522 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:31:15.888529 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:31:15.888536 kernel: Fallback order for Node 0: 0
May 15 00:31:15.888542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 00:31:15.888549 kernel: Policy zone: DMA
May 15 00:31:15.888556 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:31:15.888563 kernel: software IO TLB: area num 4.
May 15 00:31:15.888570 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 00:31:15.888577 kernel: Memory: 2386412K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185876K reserved, 0K cma-reserved)
May 15 00:31:15.888584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 00:31:15.888591 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 00:31:15.888598 kernel: rcu: RCU event tracing is enabled.
May 15 00:31:15.888604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 00:31:15.888611 kernel: Trampoline variant of Tasks RCU enabled.
May 15 00:31:15.888618 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:31:15.888625 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
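The command line logged above mixes parameters the kernel itself consumes (root=, console=, acpi=) with ones only the initrd and Flatcar tooling interpret (mount.usr=, verity.usrhash=, flatcar.first_boot=), which is why the kernel passes the unrecognized BOOT_IMAGE= token through to user space. A minimal sketch of how an initrd-side tool might split such a line into key/value pairs; the helper name parse_cmdline is hypothetical, and real parsers also handle quoted values with spaces, which str.split does not:

```python
# Hedged sketch: tokenize a kernel command line into a dict.
# Flag-style tokens without '=' (e.g. "quiet") map to None.
def parse_cmdline(line: str) -> dict:
    params = {}
    for token in line.split():  # note: ignores quoted values containing spaces
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected acpi=force")
params = parse_cmdline(cmdline)
assert params["root"] == "LABEL=ROOT"
assert params["console"] == "ttyS0,115200"
```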
May 15 00:31:15.888631 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 00:31:15.888638 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 00:31:15.888646 kernel: GICv3: 256 SPIs implemented
May 15 00:31:15.888652 kernel: GICv3: 0 Extended SPIs implemented
May 15 00:31:15.888659 kernel: Root IRQ handler: gic_handle_irq
May 15 00:31:15.888666 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 00:31:15.888672 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 00:31:15.888679 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 00:31:15.888686 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 00:31:15.888693 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 00:31:15.888699 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 00:31:15.888706 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 00:31:15.888713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 00:31:15.888721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:31:15.888727 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 00:31:15.888734 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 00:31:15.888741 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 00:31:15.888748 kernel: arm-pv: using stolen time PV
May 15 00:31:15.888755 kernel: Console: colour dummy device 80x25
May 15 00:31:15.888762 kernel: ACPI: Core revision 20230628
May 15 00:31:15.888769 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 00:31:15.888777 kernel: pid_max: default: 32768 minimum: 301
May 15 00:31:15.888783 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 00:31:15.888791 kernel: landlock: Up and running.
May 15 00:31:15.888798 kernel: SELinux: Initializing.
May 15 00:31:15.888805 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:31:15.888812 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:31:15.888819 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 00:31:15.888826 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:31:15.888833 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:31:15.888840 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:31:15.888847 kernel: rcu: Max phase no-delay instances is 400.
May 15 00:31:15.888855 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 00:31:15.888862 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 00:31:15.888868 kernel: Remapping and enabling EFI services.
May 15 00:31:15.888875 kernel: smp: Bringing up secondary CPUs ...
May 15 00:31:15.888882 kernel: Detected PIPT I-cache on CPU1
May 15 00:31:15.888889 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 00:31:15.888896 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 00:31:15.888903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:31:15.888910 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 00:31:15.888917 kernel: Detected PIPT I-cache on CPU2
May 15 00:31:15.888924 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 00:31:15.888932 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 00:31:15.888943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:31:15.888951 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 00:31:15.888958 kernel: Detected PIPT I-cache on CPU3
May 15 00:31:15.888965 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 00:31:15.888972 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 00:31:15.888979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:31:15.888986 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 00:31:15.888994 kernel: smp: Brought up 1 node, 4 CPUs
May 15 00:31:15.889009 kernel: SMP: Total of 4 processors activated.
May 15 00:31:15.889016 kernel: CPU features: detected: 32-bit EL0 Support
May 15 00:31:15.889024 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 00:31:15.889031 kernel: CPU features: detected: Common not Private translations
May 15 00:31:15.889038 kernel: CPU features: detected: CRC32 instructions
May 15 00:31:15.889045 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 00:31:15.889070 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 00:31:15.889078 kernel: CPU features: detected: LSE atomic instructions
May 15 00:31:15.889086 kernel: CPU features: detected: Privileged Access Never
May 15 00:31:15.889093 kernel: CPU features: detected: RAS Extension Support
May 15 00:31:15.889100 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 00:31:15.889107 kernel: CPU: All CPU(s) started at EL1
May 15 00:31:15.889114 kernel: alternatives: applying system-wide alternatives
May 15 00:31:15.889122 kernel: devtmpfs: initialized
May 15 00:31:15.889129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:31:15.889136 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 00:31:15.889145 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:31:15.889153 kernel: SMBIOS 3.0.0 present.
May 15 00:31:15.889160 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 15 00:31:15.889167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:31:15.889174 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 00:31:15.889182 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 00:31:15.889189 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 00:31:15.889196 kernel: audit: initializing netlink subsys (disabled)
May 15 00:31:15.889205 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 15 00:31:15.889212 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:31:15.889219 kernel: cpuidle: using governor menu
May 15 00:31:15.889226 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 00:31:15.889233 kernel: ASID allocator initialised with 32768 entries
May 15 00:31:15.889241 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:31:15.889248 kernel: Serial: AMBA PL011 UART driver
May 15 00:31:15.889255 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 00:31:15.889262 kernel: Modules: 0 pages in range for non-PLT usage
May 15 00:31:15.889271 kernel: Modules: 509008 pages in range for PLT usage
May 15 00:31:15.889278 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:31:15.889285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 00:31:15.889292 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 00:31:15.889299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 00:31:15.889306 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:31:15.889314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 00:31:15.889321 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 00:31:15.889328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 00:31:15.889335 kernel: ACPI: Added _OSI(Module Device)
May 15 00:31:15.889343 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:31:15.889351 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:31:15.889358 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:31:15.889365 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:31:15.889372 kernel: ACPI: Interpreter enabled
May 15 00:31:15.889379 kernel: ACPI: Using GIC for interrupt routing
May 15 00:31:15.889386 kernel: ACPI: MCFG table detected, 1 entries
May 15 00:31:15.889393 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 00:31:15.889400 kernel: printk: console [ttyAMA0] enabled
May 15 00:31:15.889409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:31:15.889543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:31:15.889616 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 00:31:15.889681 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 00:31:15.889746 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 00:31:15.889809 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 00:31:15.889818 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 00:31:15.889828 kernel: PCI host bridge to bus 0000:00
May 15 00:31:15.889896 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 00:31:15.889954 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 00:31:15.890023 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 00:31:15.890116 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:31:15.890196 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 00:31:15.890276 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 00:31:15.890346 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 00:31:15.890410 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 00:31:15.890474 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:31:15.890538 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:31:15.890603 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 00:31:15.890666 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 00:31:15.890725 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 00:31:15.890783 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 00:31:15.890839 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 00:31:15.890848 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 00:31:15.890856 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 00:31:15.890863 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 00:31:15.890870 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 00:31:15.890878 kernel: iommu: Default domain type: Translated
May 15 00:31:15.890885 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 00:31:15.890894 kernel: efivars: Registered efivars operations
May 15 00:31:15.890901 kernel: vgaarb: loaded
May 15 00:31:15.890908 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 00:31:15.890915 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:31:15.890923 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:31:15.890930 kernel: pnp: PnP ACPI init
May 15 00:31:15.890997 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 00:31:15.891016 kernel: pnp: PnP ACPI: found 1 devices
May 15 00:31:15.891026 kernel: NET: Registered PF_INET protocol family
May 15 00:31:15.891033 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:31:15.891041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:31:15.891048 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:31:15.891064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:31:15.891072 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:31:15.891079 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:31:15.891086 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:31:15.891093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:31:15.891103 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:31:15.891110 kernel: PCI: CLS 0 bytes, default 64
May 15 00:31:15.891117 kernel: kvm [1]: HYP mode not available
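The BAR lines above (reg 0x10 decoded as an I/O range, reg 0x14 as 32-bit memory, reg 0x20 as 64-bit prefetchable memory) follow from the low bits of each Base Address Register as defined by the PCI specification: bit 0 selects I/O versus memory space, bits 2:1 give the memory width, and bit 3 marks prefetchable. A small sketch of that decoding; the raw register values below are illustrative, not read from this machine:

```python
def decode_bar(raw: int) -> str:
    """Classify a PCI Base Address Register value by its low bits."""
    if raw & 0x1:                          # bit 0 set: I/O space BAR
        return f"io @ {raw & ~0x3:#x}"
    width = "64bit" if (raw >> 1) & 0x3 == 0b10 else "32bit"
    pref = " pref" if raw & 0x8 else ""    # bit 3: prefetchable memory
    return f"mem @ {raw & ~0xF:#x} {width}{pref}"

print(decode_bar(0x00001001))  # I/O BAR, like reg 0x10 above
print(decode_bar(0x10000000))  # 32-bit memory BAR, like reg 0x14
print(decode_bar(0x0000000C))  # 64-bit prefetchable BAR, like reg 0x20
```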
May 15 00:31:15.891124 kernel: Initialise system trusted keyrings
May 15 00:31:15.891132 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:31:15.891139 kernel: Key type asymmetric registered
May 15 00:31:15.891146 kernel: Asymmetric key parser 'x509' registered
May 15 00:31:15.891153 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 00:31:15.891160 kernel: io scheduler mq-deadline registered
May 15 00:31:15.891169 kernel: io scheduler kyber registered
May 15 00:31:15.891176 kernel: io scheduler bfq registered
May 15 00:31:15.891183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 00:31:15.891190 kernel: ACPI: button: Power Button [PWRB]
May 15 00:31:15.891198 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 00:31:15.891273 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 00:31:15.891283 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:31:15.891290 kernel: thunder_xcv, ver 1.0
May 15 00:31:15.891297 kernel: thunder_bgx, ver 1.0
May 15 00:31:15.891307 kernel: nicpf, ver 1.0
May 15 00:31:15.891314 kernel: nicvf, ver 1.0
May 15 00:31:15.891388 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 00:31:15.891450 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:31:15 UTC (1747269075)
May 15 00:31:15.891460 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 00:31:15.891468 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 00:31:15.891475 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 00:31:15.891482 kernel: watchdog: Hard watchdog permanently disabled
May 15 00:31:15.891491 kernel: NET: Registered PF_INET6 protocol family
May 15 00:31:15.891498 kernel: Segment Routing with IPv6
May 15 00:31:15.891505 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:31:15.891513 kernel: NET: Registered PF_PACKET protocol family
May 15 00:31:15.891520 kernel: Key type dns_resolver registered
May 15 00:31:15.891527 kernel: registered taskstats version 1
May 15 00:31:15.891534 kernel: Loading compiled-in X.509 certificates
May 15 00:31:15.891541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 6afb3c096bffb4980a4bcc170ebe3729821d8e0d'
May 15 00:31:15.891548 kernel: Key type .fscrypt registered
May 15 00:31:15.891557 kernel: Key type fscrypt-provisioning registered
May 15 00:31:15.891564 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:31:15.891571 kernel: ima: Allocated hash algorithm: sha1
May 15 00:31:15.891579 kernel: ima: No architecture policies found
May 15 00:31:15.891586 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 00:31:15.891593 kernel: clk: Disabling unused clocks
May 15 00:31:15.891600 kernel: Freeing unused kernel memory: 39424K
May 15 00:31:15.891607 kernel: Run /init as init process
May 15 00:31:15.891614 kernel: with arguments:
May 15 00:31:15.891623 kernel: /init
May 15 00:31:15.891630 kernel: with environment:
May 15 00:31:15.891637 kernel: HOME=/
May 15 00:31:15.891644 kernel: TERM=linux
May 15 00:31:15.891651 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:31:15.891660 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:31:15.891669 systemd[1]: Detected virtualization kvm.
May 15 00:31:15.891679 systemd[1]: Detected architecture arm64.
May 15 00:31:15.891686 systemd[1]: Running in initrd.
May 15 00:31:15.891694 systemd[1]: No hostname configured, using default hostname.
May 15 00:31:15.891701 systemd[1]: Hostname set to .
May 15 00:31:15.891709 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:31:15.891717 systemd[1]: Queued start job for default target initrd.target.
May 15 00:31:15.891725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:31:15.891733 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:31:15.891741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:31:15.891750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:31:15.891758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:31:15.891766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:31:15.891775 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:31:15.891782 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:31:15.891790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:31:15.891799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:31:15.891807 systemd[1]: Reached target paths.target - Path Units.
May 15 00:31:15.891815 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:31:15.891822 systemd[1]: Reached target swap.target - Swaps.
May 15 00:31:15.891830 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:31:15.891838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:31:15.891845 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:31:15.891853 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:31:15.891861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 00:31:15.891870 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:31:15.891878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:31:15.891886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:31:15.891893 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:31:15.891901 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:31:15.891909 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:31:15.891916 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:31:15.891924 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:31:15.891932 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:31:15.891941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:31:15.891948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:31:15.891956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:31:15.891964 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:31:15.891971 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:31:15.891981 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:31:15.891989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:31:15.891997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:31:15.892028 systemd-journald[237]: Collecting audit messages is disabled.
May 15 00:31:15.892049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:31:15.892094 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:31:15.892110 systemd-journald[237]: Journal started
May 15 00:31:15.892129 systemd-journald[237]: Runtime Journal (/run/log/journal/15c4ac46b8b94494ab06abdfdaa186a2) is 5.9M, max 47.3M, 41.4M free.
May 15 00:31:15.883190 systemd-modules-load[238]: Inserted module 'overlay'
May 15 00:31:15.895350 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:31:15.895382 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:31:15.896288 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 15 00:31:15.897072 kernel: Bridge firewalling registered
May 15 00:31:15.897911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:31:15.911247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:31:15.912743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:31:15.914139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:31:15.915834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:31:15.920261 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:31:15.921459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:31:15.924100 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:31:15.926517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:31:15.934457 dracut-cmdline[276]: dracut-dracut-053
May 15 00:31:15.936825 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596
May 15 00:31:15.953077 systemd-resolved[281]: Positive Trust Anchors:
May 15 00:31:15.953094 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:31:15.953127 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:31:15.957701 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 15 00:31:15.962073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:31:15.963168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:31:16.006097 kernel: SCSI subsystem initialized
May 15 00:31:16.011082 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:31:16.019090 kernel: iscsi: registered transport (tcp)
May 15 00:31:16.032081 kernel: iscsi: registered transport (qla4xxx)
May 15 00:31:16.032102 kernel: QLogic iSCSI HBA Driver
May 15 00:31:16.079076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 00:31:16.087206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 00:31:16.105659 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:31:16.105700 kernel: device-mapper: uevent: version 1.0.3
May 15 00:31:16.105715 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 00:31:16.152071 kernel: raid6: neonx8 gen() 15785 MB/s
May 15 00:31:16.169066 kernel: raid6: neonx4 gen() 15549 MB/s
May 15 00:31:16.186068 kernel: raid6: neonx2 gen() 13303 MB/s
May 15 00:31:16.203067 kernel: raid6: neonx1 gen() 10479 MB/s
May 15 00:31:16.220079 kernel: raid6: int64x8 gen() 6902 MB/s
May 15 00:31:16.237069 kernel: raid6: int64x4 gen() 7346 MB/s
May 15 00:31:16.254067 kernel: raid6: int64x2 gen() 6127 MB/s
May 15 00:31:16.271065 kernel: raid6: int64x1 gen() 5033 MB/s
May 15 00:31:16.271082 kernel: raid6: using algorithm neonx8 gen() 15785 MB/s
May 15 00:31:16.288084 kernel: raid6: .... xor() 11882 MB/s, rmw enabled
May 15 00:31:16.288113 kernel: raid6: using neon recovery algorithm
May 15 00:31:16.293078 kernel: xor: measuring software checksum speed
May 15 00:31:16.293103 kernel: 8regs : 19812 MB/sec
May 15 00:31:16.294493 kernel: 32regs : 17955 MB/sec
May 15 00:31:16.294507 kernel: arm64_neon : 26927 MB/sec
May 15 00:31:16.294519 kernel: xor: using function: arm64_neon (26927 MB/sec)
May 15 00:31:16.348100 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 00:31:16.360095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
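The raid6 gen() and xor lines above are the kernel benchmarking its parity back-ends at boot and picking the fastest (neonx8 for RAID-6 P/Q generation, arm64_neon for plain XOR). The XOR checksum being timed is ordinary bytewise parity: P is the XOR of all data blocks, so any single missing block can be recovered by XOR-ing P with the survivors. A toy illustration of that property (pure Python, so orders of magnitude slower than the MB/s figures logged above):

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks, i.e. the RAID P parity."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d0, d1, d2 = b"flatcar!", b"raid-xor", b"parity!!"
p = xor_blocks(d0, d1, d2)              # parity block
assert xor_blocks(p, d0, d2) == d1      # recover a "lost" block from the rest
```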
May 15 00:31:16.366194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:31:16.377221 systemd-udevd[464]: Using default interface naming scheme 'v255'.
May 15 00:31:16.380843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:31:16.384025 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 00:31:16.398124 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
May 15 00:31:16.422694 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:31:16.431203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:31:16.470613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:31:16.477206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 00:31:16.487944 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 00:31:16.489347 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:31:16.490949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:31:16.492024 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:31:16.500217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 00:31:16.510140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:31:16.518089 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 15 00:31:16.518386 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 00:31:16.521591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:31:16.521621 kernel: GPT:9289727 != 19775487
May 15 00:31:16.521631 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:31:16.522877 kernel: GPT:9289727 != 19775487
May 15 00:31:16.522903 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:31:16.522913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:31:16.529376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:31:16.530655 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:31:16.535125 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:31:16.539079 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
May 15 00:31:16.539365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:31:16.539515 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:31:16.543640 kernel: BTRFS: device fsid c82d3215-8134-4516-8c53-9d29a8823a8c devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513)
May 15 00:31:16.542738 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:31:16.553251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:31:16.564095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 00:31:16.565098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:31:16.574063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 00:31:16.580855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
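The GPT warnings above ("9289727 != 19775487") mean the primary header's AlternateLBA field still points at sector 9289727, where the backup header sat in the original disk image, while the virtual disk actually ends at LBA 19775487; that is the usual result of writing a small image onto a larger disk, and it is what the disk-uuid.service run further down repairs. A sketch of the same check, assuming 512-byte sectors and the standard header layout (signature "EFI PART" at the start of LBA 1, AlternateLBA as the little-endian u64 at byte offset 32); gpt_backup_mismatch is a hypothetical helper, not a Flatcar tool:

```python
import struct

SECTOR = 512  # logical block size reported for vda above

def gpt_backup_mismatch(disk_path: str, total_sectors: int) -> bool:
    """True if the primary GPT header's AlternateLBA is not the disk's last LBA."""
    with open(disk_path, "rb") as disk:
        disk.seek(1 * SECTOR)             # primary header lives in LBA 1
        header = disk.read(92)
    assert header[:8] == b"EFI PART"      # GPT signature check
    (alternate_lba,) = struct.unpack_from("<Q", header, 32)
    return alternate_lba != total_sectors - 1

# For the vda logged above: alternate_lba == 9289727 while
# total_sectors - 1 == 19775487, so this would return True.
```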
May 15 00:31:16.584574 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 00:31:16.585479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 00:31:16.601219 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 00:31:16.603200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:31:16.606671 disk-uuid[553]: Primary Header is updated.
May 15 00:31:16.606671 disk-uuid[553]: Secondary Entries is updated.
May 15 00:31:16.606671 disk-uuid[553]: Secondary Header is updated.
May 15 00:31:16.610068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:31:16.625173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:31:17.622082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:31:17.622450 disk-uuid[554]: The operation has completed successfully.
May 15 00:31:17.645753 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:31:17.645852 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 00:31:17.676219 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 00:31:17.679144 sh[573]: Success
May 15 00:31:17.691137 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 00:31:17.719482 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 00:31:17.742622 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 00:31:17.746098 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 00:31:17.755183 kernel: BTRFS info (device dm-0): first mount of filesystem c82d3215-8134-4516-8c53-9d29a8823a8c
May 15 00:31:17.755222 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 15 00:31:17.755234 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 00:31:17.757165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 00:31:17.757194 kernel: BTRFS info (device dm-0): using free space tree
May 15 00:31:17.760503 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 00:31:17.761924 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 00:31:17.774236 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 00:31:17.776599 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 00:31:17.782924 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:31:17.782963 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:31:17.782974 kernel: BTRFS info (device vda6): using free space tree
May 15 00:31:17.785100 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:31:17.792528 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 00:31:17.794251 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:31:17.799110 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 00:31:17.806270 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 00:31:17.886934 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:31:17.905204 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:31:17.923484 ignition[657]: Ignition 2.19.0
May 15 00:31:17.923496 ignition[657]: Stage: fetch-offline
May 15 00:31:17.923531 ignition[657]: no configs at "/usr/lib/ignition/base.d"
May 15 00:31:17.923539 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:31:17.923822 ignition[657]: parsed url from cmdline: ""
May 15 00:31:17.923826 ignition[657]: no config URL provided
May 15 00:31:17.923831 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:31:17.923840 ignition[657]: no config at "/usr/lib/ignition/user.ign"
May 15 00:31:17.923864 ignition[657]: op(1): [started] loading QEMU firmware config module
May 15 00:31:17.923868 ignition[657]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 00:31:17.932701 systemd-networkd[767]: lo: Link UP
May 15 00:31:17.932711 systemd-networkd[767]: lo: Gained carrier
May 15 00:31:17.933447 systemd-networkd[767]: Enumeration completed
May 15 00:31:17.933584 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:31:17.936600 ignition[657]: op(1): [finished] loading QEMU firmware config module
May 15 00:31:17.934044 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:31:17.934048 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:31:17.934968 systemd-networkd[767]: eth0: Link UP
May 15 00:31:17.934971 systemd-networkd[767]: eth0: Gained carrier
May 15 00:31:17.934978 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:31:17.935128 systemd[1]: Reached target network.target - Network.
May 15 00:31:17.953092 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:31:17.983691 ignition[657]: parsing config with SHA512: 1390789aaa7713a58ac04157f50109f173f6fb63093ec0a1261c37401ece7df86006579643b057ff601fbf7be6028bc0e88d33b851d0f4ac549674fd0c333f76
May 15 00:31:17.990115 unknown[657]: fetched base config from "system"
May 15 00:31:17.990130 unknown[657]: fetched user config from "qemu"
May 15 00:31:17.991381 ignition[657]: fetch-offline: fetch-offline passed
May 15 00:31:17.993211 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:31:17.991469 ignition[657]: Ignition finished successfully
May 15 00:31:17.994616 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 00:31:18.004198 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 00:31:18.014666 ignition[774]: Ignition 2.19.0
May 15 00:31:18.014676 ignition[774]: Stage: kargs
May 15 00:31:18.014837 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 15 00:31:18.014847 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:31:18.015661 ignition[774]: kargs: kargs passed
May 15 00:31:18.018651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
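The "parsing config with SHA512: 1390…" entry above is Ignition logging the SHA-512 digest of the merged config it is about to apply (here assembled from the built-in "system" base config plus the user config delivered over QEMU's fw_cfg device). Recomputing such a digest to cross-check a config file against the journal is a single hashlib call; the config body below is illustrative only, so its digest will not match the one logged above:

```python
import hashlib

def config_sha512(config: bytes) -> str:
    """Hex SHA-512 digest, the form Ignition prints in its journal entries."""
    return hashlib.sha512(config).hexdigest()

config = b'{"ignition": {"version": "3.4.0"}}'   # illustrative body only
print(config_sha512(config))  # compare against the digest in the log
```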
May 15 00:31:18.015702 ignition[774]: Ignition finished successfully
May 15 00:31:18.020533 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 00:31:18.035974 ignition[783]: Ignition 2.19.0
May 15 00:31:18.036001 ignition[783]: Stage: disks
May 15 00:31:18.036232 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 15 00:31:18.036241 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:31:18.038959 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 00:31:18.037126 ignition[783]: disks: disks passed
May 15 00:31:18.040208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 00:31:18.037175 ignition[783]: Ignition finished successfully
May 15 00:31:18.041872 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 00:31:18.043909 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:31:18.045284 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:31:18.046936 systemd[1]: Reached target basic.target - Basic System.
May 15 00:31:18.053216 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 00:31:18.062286 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 00:31:18.066310 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 00:31:18.068732 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 00:31:18.110885 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 00:31:18.112478 kernel: EXT4-fs (vda9): mounted filesystem 5a01cbd3-e7cb-4475-87b3-07e348161203 r/w with ordered data mode. Quota mode: none.
May 15 00:31:18.112239 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 00:31:18.128157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:31:18.129951 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 00:31:18.131098 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 00:31:18.131206 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 00:31:18.131241 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:31:18.139973 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
May 15 00:31:18.140003 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:31:18.140015 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:31:18.138023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 00:31:18.143042 kernel: BTRFS info (device vda6): using free space tree
May 15 00:31:18.144072 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:31:18.144147 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 00:31:18.146036 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:31:18.193189 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 15 00:31:18.197139 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 15 00:31:18.201197 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 15 00:31:18.204945 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 00:31:18.278849 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 00:31:18.290197 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 00:31:18.292587 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 00:31:18.297087 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:31:18.315019 ignition[914]: INFO : Ignition 2.19.0
May 15 00:31:18.315019 ignition[914]: INFO : Stage: mount
May 15 00:31:18.316813 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:31:18.316813 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:31:18.316813 ignition[914]: INFO : mount: mount passed
May 15 00:31:18.316813 ignition[914]: INFO : Ignition finished successfully
May 15 00:31:18.316129 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 00:31:18.318245 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 00:31:18.330175 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 00:31:18.754721 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 00:31:18.772250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:31:18.777072 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
May 15 00:31:18.779468 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:31:18.779489 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:31:18.779501 kernel: BTRFS info (device vda6): using free space tree
May 15 00:31:18.782076 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:31:18.783124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:31:18.805548 ignition[944]: INFO : Ignition 2.19.0
May 15 00:31:18.805548 ignition[944]: INFO : Stage: files
May 15 00:31:18.807328 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:31:18.807328 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:31:18.807328 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
May 15 00:31:18.810579 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 00:31:18.810579 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 00:31:18.810579 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 00:31:18.810579 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 00:31:18.810579 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 00:31:18.810579 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 15 00:31:18.810579 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 15 00:31:18.809642 unknown[944]: wrote ssh authorized keys file for user: core
May 15 00:31:18.863349 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 00:31:19.068900 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 15 00:31:19.068900 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 00:31:19.072671 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 00:31:19.432709 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 00:31:19.445333 systemd-networkd[767]: eth0: Gained IPv6LL May 15 00:31:19.860641 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 00:31:19.860641 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 00:31:19.865081 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:31:19.891354 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:31:19.895240 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:31:19.898013 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:31:19.898013 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 00:31:19.898013 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:31:19.898013 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:31:19.898013 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:31:19.898013 ignition[944]: INFO : files: files passed May 15 00:31:19.898013 ignition[944]: INFO : Ignition finished successfully May 15 00:31:19.899664 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:31:19.908434 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:31:19.910149 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 15 00:31:19.911340 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:31:19.913098 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 00:31:19.917381 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 00:31:19.920450 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:31:19.920450 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:31:19.922843 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:31:19.922757 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:31:19.924196 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 00:31:19.933249 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 00:31:19.953467 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:31:19.954168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 00:31:19.955832 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 00:31:19.957778 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 00:31:19.959685 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 00:31:19.971224 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 00:31:19.983935 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:31:20.001244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 00:31:20.009104 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 00:31:20.010314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:31:20.012352 systemd[1]: Stopped target timers.target - Timer Units.
May 15 00:31:20.014106 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:31:20.014230 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:31:20.016809 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 00:31:20.018758 systemd[1]: Stopped target basic.target - Basic System.
May 15 00:31:20.020160 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 00:31:20.021885 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:31:20.023712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 00:31:20.025532 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 00:31:20.027198 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:31:20.029069 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 00:31:20.030868 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 00:31:20.032492 systemd[1]: Stopped target swap.target - Swaps.
May 15 00:31:20.033919 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:31:20.034051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:31:20.036190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 00:31:20.038026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:31:20.040117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:31:20.042182 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:31:20.043449 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:31:20.043561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:31:20.046522 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:31:20.046638 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:31:20.048708 systemd[1]: Stopped target paths.target - Path Units. May 15 00:31:20.050358 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:31:20.053256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:31:20.054549 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:31:20.056748 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:31:20.058366 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:31:20.058455 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:31:20.060061 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:31:20.060146 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:31:20.061759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:31:20.061863 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:31:20.063701 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:31:20.063799 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:31:20.071216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:31:20.072716 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:31:20.073723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:31:20.073839 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:31:20.075807 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:31:20.075909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:31:20.082523 ignition[998]: INFO : Ignition 2.19.0 May 15 00:31:20.082523 ignition[998]: INFO : Stage: umount May 15 00:31:20.086322 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:31:20.086322 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:31:20.086322 ignition[998]: INFO : umount: umount passed May 15 00:31:20.086322 ignition[998]: INFO : Ignition finished successfully May 15 00:31:20.082668 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:31:20.082760 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:31:20.086697 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:31:20.088087 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:31:20.093518 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:31:20.093976 systemd[1]: Stopped target network.target - Network. May 15 00:31:20.095088 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 15 00:31:20.095146 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:31:20.097017 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:31:20.097071 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:31:20.099034 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:31:20.099090 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:31:20.100975 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:31:20.101032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:31:20.102979 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:31:20.104812 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:31:20.114103 systemd-networkd[767]: eth0: DHCPv6 lease lost May 15 00:31:20.115579 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:31:20.115702 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:31:20.117561 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:31:20.117595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:31:20.135372 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:31:20.136303 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:31:20.136374 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:31:20.142420 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:31:20.148582 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:31:20.148681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:31:20.151918 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:31:20.152034 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:31:20.155069 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:31:20.155127 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:31:20.156305 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:31:20.156356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:31:20.158153 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:31:20.158198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:31:20.160195 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:31:20.160252 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:31:20.162249 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:31:20.162370 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:31:20.164034 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:31:20.164132 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:31:20.165903 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:31:20.165965 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:31:20.166922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:31:20.166956 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 00:31:20.168609 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:31:20.168651 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:31:20.170624 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:31:20.170666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:31:20.172966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:31:20.173017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:31:20.189208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:31:20.190303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:31:20.190367 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:31:20.192591 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 00:31:20.192637 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:31:20.194738 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:31:20.194784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:31:20.197059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:31:20.197107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:31:20.199578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:31:20.199658 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:31:20.201870 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:31:20.203667 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:31:20.213279 systemd[1]: Switching root. May 15 00:31:20.238513 systemd-journald[237]: Journal stopped May 15 00:31:20.946468 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 15 00:31:20.946534 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:31:20.946548 kernel: SELinux: policy capability open_perms=1 May 15 00:31:20.946558 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:31:20.946568 kernel: SELinux: policy capability always_check_network=0 May 15 00:31:20.946578 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:31:20.946588 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:31:20.946598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:31:20.946608 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:31:20.946621 kernel: audit: type=1403 audit(1747269080.383:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:31:20.946632 systemd[1]: Successfully loaded SELinux policy in 36.361ms. May 15 00:31:20.946649 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.361ms. May 15 00:31:20.946661 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:31:20.946672 systemd[1]: Detected virtualization kvm. May 15 00:31:20.946683 systemd[1]: Detected architecture arm64. 
May 15 00:31:20.946696 systemd[1]: Detected first boot. May 15 00:31:20.946707 systemd[1]: Initializing machine ID from VM UUID. May 15 00:31:20.946718 zram_generator::config[1042]: No configuration found. May 15 00:31:20.946732 systemd[1]: Populated /etc with preset unit settings. May 15 00:31:20.946743 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:31:20.946754 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:31:20.946766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:31:20.946777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:31:20.946788 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:31:20.946800 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:31:20.946819 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:31:20.946833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:31:20.946845 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:31:20.946856 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:31:20.946868 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:31:20.946884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:31:20.946896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:31:20.946907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:31:20.946919 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:31:20.946931 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:31:20.946943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:31:20.946955 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:31:20.946966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:31:20.946977 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:31:20.946996 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:31:20.947008 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:31:20.947019 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:31:20.947032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:31:20.947044 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:31:20.947067 systemd[1]: Reached target slices.target - Slice Units. May 15 00:31:20.947079 systemd[1]: Reached target swap.target - Swaps. May 15 00:31:20.947094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:31:20.947106 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:31:20.947117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:31:20.947128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 15 00:31:20.947138 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:31:20.947149 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:31:20.947162 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 00:31:20.947173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:31:20.947184 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:31:20.947194 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:31:20.947206 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:31:20.947217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:31:20.947229 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:31:20.947239 systemd[1]: Reached target machines.target - Containers. May 15 00:31:20.947252 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:31:20.947263 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:31:20.947274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:31:20.947285 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:31:20.947296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:31:20.947307 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:31:20.947318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:31:20.947328 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:31:20.947340 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:31:20.947351 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:31:20.947362 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:31:20.947373 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:31:20.947384 kernel: fuse: init (API version 7.39) May 15 00:31:20.947394 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:31:20.947405 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:31:20.947416 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:31:20.947426 kernel: loop: module loaded May 15 00:31:20.947438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:31:20.947450 kernel: ACPI: bus type drm_connector registered May 15 00:31:20.947460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:31:20.947471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:31:20.947482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:31:20.947493 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:31:20.947504 systemd[1]: Stopped verity-setup.service. May 15 00:31:20.947514 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
May 15 00:31:20.947527 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:31:20.947559 systemd-journald[1117]: Collecting audit messages is disabled. May 15 00:31:20.947586 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:31:20.947597 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:31:20.947608 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:31:20.947621 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:31:20.947633 systemd-journald[1117]: Journal started May 15 00:31:20.947654 systemd-journald[1117]: Runtime Journal (/run/log/journal/15c4ac46b8b94494ab06abdfdaa186a2) is 5.9M, max 47.3M, 41.4M free. May 15 00:31:20.738221 systemd[1]: Queued start job for default target multi-user.target. May 15 00:31:20.759610 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 00:31:20.759989 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:31:20.950077 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:31:20.952073 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:31:20.953267 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:31:20.954506 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:31:20.956090 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:31:20.957250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:31:20.957383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:31:20.958581 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:31:20.958726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:31:20.959850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:31:20.959996 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:31:20.961436 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:31:20.961571 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:31:20.964432 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:31:20.964590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:31:20.965774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:31:20.968095 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:31:20.969291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:31:20.980805 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:31:20.995199 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:31:20.997097 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:31:20.998182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:31:20.998220 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:31:21.000162 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 15 00:31:21.002226 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
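Each of the modprobe@<module>.service start/finish pairs above is an instance of a single systemd template unit; abridged, and assuming the stock systemd template rather than anything Flatcar-specific, it looks roughly like:

    # modprobe@.service (abridged sketch of the stock template)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading "-" on ExecStart makes a failed modprobe non-fatal, which is why each instance can finish cleanly whether the module is loadable, built in, or absent.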
May 15 00:31:21.004100 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:31:21.005174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:31:21.006504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:31:21.008171 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:31:21.009440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:31:21.013231 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:31:21.014103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:31:21.015202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:31:21.015488 systemd-journald[1117]: Time spent on flushing to /var/log/journal/15c4ac46b8b94494ab06abdfdaa186a2 is 22.192ms for 855 entries. May 15 00:31:21.015488 systemd-journald[1117]: System Journal (/var/log/journal/15c4ac46b8b94494ab06abdfdaa186a2) is 8.0M, max 195.6M, 187.6M free. May 15 00:31:21.044125 systemd-journald[1117]: Received client request to flush runtime journal. May 15 00:31:21.018317 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:31:21.023119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:31:21.028096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:31:21.029236 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:31:21.030258 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:31:21.031571 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:31:21.032710 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:31:21.036652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:31:21.046130 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 15 00:31:21.049296 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:31:21.051351 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:31:21.052918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:31:21.054090 kernel: loop0: detected capacity change from 0 to 114328 May 15 00:31:21.054863 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 15 00:31:21.054882 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. May 15 00:31:21.064374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:31:21.078201 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:31:21.079251 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:31:21.080767 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:31:21.085166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 15 00:31:21.089607 udevadm[1165]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:31:21.103095 kernel: loop1: detected capacity change from 0 to 114432 May 15 00:31:21.119911 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:31:21.131543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:31:21.139097 kernel: loop2: detected capacity change from 0 to 201592 May 15 00:31:21.143961 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 15 00:31:21.143989 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. May 15 00:31:21.150140 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:31:21.187082 kernel: loop3: detected capacity change from 0 to 114328 May 15 00:31:21.197090 kernel: loop4: detected capacity change from 0 to 114432 May 15 00:31:21.208074 kernel: loop5: detected capacity change from 0 to 201592 May 15 00:31:21.211799 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 00:31:21.212252 (sd-merge)[1183]: Merged extensions into '/usr'. May 15 00:31:21.216047 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:31:21.216190 systemd[1]: Reloading... May 15 00:31:21.270078 zram_generator::config[1209]: No configuration found. May 15 00:31:21.318311 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:31:21.357574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:31:21.394013 systemd[1]: Reloading finished in 177 ms. May 15 00:31:21.423500 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:31:21.424619 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:31:21.434237 systemd[1]: Starting ensure-sysext.service... May 15 00:31:21.436026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:31:21.446938 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... May 15 00:31:21.446953 systemd[1]: Reloading... May 15 00:31:21.460225 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:31:21.460475 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:31:21.461169 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:31:21.461380 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 15 00:31:21.461430 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 15 00:31:21.464085 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:31:21.464189 systemd-tmpfiles[1245]: Skipping /boot May 15 00:31:21.471230 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:31:21.471312 systemd-tmpfiles[1245]: Skipping /boot May 15 00:31:21.496080 zram_generator::config[1272]: No configuration found. 
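The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr. For an image to be eligible for that merge it must ship an extension-release file whose fields match the host's os-release; a minimal sketch, assuming the usual Flatcar sysext conventions (the real file inside the image is not shown in this log):

    # usr/lib/extension-release.d/extension-release.kubernetes
    # (inside kubernetes-v1.32.0-arm64.raw; illustrative values)
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64

The two "Reloading" rounds that follow — first requested by systemd-sysext itself, then by systemctl on behalf of ensure-sysext.service — reload the manager configuration so units contributed by the merged images become visible.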
May 15 00:31:21.585403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:31:21.622417 systemd[1]: Reloading finished in 175 ms. May 15 00:31:21.641515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:31:21.649438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:31:21.657353 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:31:21.659680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 00:31:21.661761 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:31:21.665356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:31:21.670348 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:31:21.674401 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:31:21.677757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:31:21.681422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:31:21.688300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:31:21.690912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:31:21.692730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:31:21.693896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:31:21.694524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:31:21.707183 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:31:21.709213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:31:21.709373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:31:21.711362 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:31:21.711560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:31:21.712512 systemd-udevd[1314]: Using default interface naming scheme 'v255'. May 15 00:31:21.718202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:31:21.739701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:31:21.741072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:31:21.741223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:31:21.742874 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:31:21.748350 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:31:21.750310 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 00:31:21.752263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
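The docker.socket complaint above is emitted on each of those reloads because systemd only rewrites the legacy path in memory; the remedy it asks for is a one-line change in the socket unit:

    # docker.socket, [Socket] section
    # was: ListenStream=/var/run/docker.sock   (legacy /var/run location)
    ListenStream=/run/docker.sock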
May 15 00:31:21.752410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:31:21.756291 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:31:21.758952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:31:21.765538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:31:21.768301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:31:21.776448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:31:21.778273 augenrules[1340]: No rules May 15 00:31:21.778765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:31:21.781546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:31:21.782730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:31:21.786508 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:31:21.790368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:31:21.791884 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:31:21.793493 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:31:21.793636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:31:21.808188 systemd[1]: Finished ensure-sysext.service. May 15 00:31:21.811730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:31:21.811887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:31:21.813232 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:31:21.813375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:31:21.814628 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:31:21.814853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:31:21.822823 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:31:21.826439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:31:21.826504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:31:21.834278 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:31:21.835223 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:31:21.835385 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:31:21.869101 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1347) May 15 00:31:21.900825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:31:21.912255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 00:31:21.931884 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
May 15 00:31:21.932972 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:31:21.950162 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:31:21.954073 systemd-networkd[1362]: lo: Link UP May 15 00:31:21.954079 systemd-networkd[1362]: lo: Gained carrier May 15 00:31:21.954829 systemd-networkd[1362]: Enumeration completed May 15 00:31:21.955144 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:31:21.965223 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:31:21.965234 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:31:21.965536 systemd-resolved[1312]: Positive Trust Anchors: May 15 00:31:21.965553 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:31:21.965586 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:31:21.966344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:31:21.976476 systemd-networkd[1362]: eth0: Link UP May 15 00:31:21.976486 systemd-networkd[1362]: eth0: Gained carrier May 15 00:31:21.976503 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:31:21.978283 systemd-resolved[1312]: Defaulting to hostname 'linux'. May 15 00:31:21.982756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:31:21.989098 systemd[1]: Reached target network.target - Network. May 15 00:31:21.990083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:31:21.992126 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:31:21.993447 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection. May 15 00:31:22.487717 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:31:22.487762 systemd-timesyncd[1381]: Initial clock synchronization to Thu 2025-05-15 00:31:22.487617 UTC. May 15 00:31:22.489324 systemd-resolved[1312]: Clock change detected. Flushing caches. May 15 00:31:22.494416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:31:22.496109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:31:22.499757 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 00:31:22.518346 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:31:22.535828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:31:22.558841 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
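eth0 is matched here by the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd notes the match is "based on potentially unpredictable interface name". In spirit — the exact shipped file may carry more options — that unit is a match-anything DHCP configuration:

    # /usr/lib/systemd/network/zz-default.network (approximate sketch)
    [Match]
    Name=*

    [Network]
    DHCP=yes   # yields the DHCPv4 lease 10.0.0.130/16 and the DHCPv6 activity seen earlier

The initial clock synchronization to 00:31:22.487617 steps the log timestamps forward by roughly half a second, which is why systemd-resolved reports "Clock change detected" and flushes its caches.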
May 15 00:31:22.560357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:31:22.561475 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:31:22.562598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:31:22.563825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:31:22.565271 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:31:22.566460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:31:22.567668 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:31:22.568888 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:31:22.568928 systemd[1]: Reached target paths.target - Path Units. May 15 00:31:22.569831 systemd[1]: Reached target timers.target - Timer Units. May 15 00:31:22.572033 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:31:22.574552 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:31:22.586307 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:31:22.588754 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:31:22.590333 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:31:22.591481 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:31:22.592486 systemd[1]: Reached target basic.target - Basic System. May 15 00:31:22.593437 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:31:22.593471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:31:22.594382 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:31:22.596400 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:31:22.597322 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:31:22.598768 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:31:22.605587 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:31:22.606345 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:31:22.612835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:31:22.616626 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:31:22.624290 jq[1414]: false May 15 00:31:22.619513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 00:31:22.622779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 00:31:22.627174 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:31:22.629119 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 15 00:31:22.629880 extend-filesystems[1415]: Found loop3 May 15 00:31:22.630937 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:31:22.632041 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:31:22.632627 extend-filesystems[1415]: Found loop4 May 15 00:31:22.633806 extend-filesystems[1415]: Found loop5 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda May 15 00:31:22.633806 extend-filesystems[1415]: Found vda1 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda2 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda3 May 15 00:31:22.633806 extend-filesystems[1415]: Found usr May 15 00:31:22.633806 extend-filesystems[1415]: Found vda4 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda6 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda7 May 15 00:31:22.633806 extend-filesystems[1415]: Found vda9 May 15 00:31:22.633806 extend-filesystems[1415]: Checking size of /dev/vda9 May 15 00:31:22.635405 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:31:22.647890 dbus-daemon[1413]: [system] SELinux support is enabled May 15 00:31:22.640280 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:31:22.643706 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:31:22.643903 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:31:22.666280 jq[1428]: true May 15 00:31:22.645040 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:31:22.645205 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:31:22.648193 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:31:22.659804 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:31:22.659843 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 00:31:22.664390 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:31:22.664420 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:31:22.666965 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:31:22.669313 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 15 00:31:22.671449 jq[1441]: true May 15 00:31:22.679066 extend-filesystems[1415]: Resized partition /dev/vda9 May 15 00:31:22.681390 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:31:22.683634 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) May 15 00:31:22.686914 tar[1434]: linux-arm64/LICENSE May 15 00:31:22.686914 tar[1434]: linux-arm64/helm May 15 00:31:22.694820 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:31:22.694887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1371) May 15 00:31:22.694927 update_engine[1425]: I20250515 00:31:22.689746 1425 main.cc:92] Flatcar Update Engine starting May 15 00:31:22.697096 update_engine[1425]: I20250515 00:31:22.696938 1425 update_check_scheduler.cc:74] Next update check in 4m15s May 15 00:31:22.697369 systemd[1]: Started update-engine.service - Update Engine. May 15 00:31:22.711446 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:31:22.724898 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) May 15 00:31:22.726122 systemd-logind[1422]: New seat seat0. May 15 00:31:22.727495 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:31:22.742268 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:31:22.783437 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:31:22.783437 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:31:22.783437 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:31:22.791122 extend-filesystems[1415]: Resized filesystem in /dev/vda9 May 15 00:31:22.785359 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:31:22.787483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:31:22.794568 bash[1467]: Updated "/home/core/.ssh/authorized_keys" May 15 00:31:22.798419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:31:22.803031 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 00:31:22.818953 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:31:22.926697 containerd[1443]: time="2025-05-15T00:31:22.926377286Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 15 00:31:22.952071 containerd[1443]: time="2025-05-15T00:31:22.952018646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.953512 containerd[1443]: time="2025-05-15T00:31:22.953470166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953659606Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953686966Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
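With the 4 KiB block size reported for vda9, the before/after block counts of the online resize above work out to:

    553472  blocks × 4096 B = 2,267,021,312 B ≈ 2.1 GiB   (as-shipped root filesystem)
    1864699 blocks × 4096 B = 7,637,807,104 B ≈ 7.1 GiB   (after growing into the partition)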
type=io.containerd.internal.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953858086Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953889486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953970206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:31:22.954050 containerd[1443]: time="2025-05-15T00:31:22.953984646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.954553 containerd[1443]: time="2025-05-15T00:31:22.954466726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:31:22.954624 containerd[1443]: time="2025-05-15T00:31:22.954609446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.954736 containerd[1443]: time="2025-05-15T00:31:22.954717846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:31:22.954798 containerd[1443]: time="2025-05-15T00:31:22.954784126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.955008 containerd[1443]: time="2025-05-15T00:31:22.954987446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.955483 containerd[1443]: time="2025-05-15T00:31:22.955443166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:31:22.956008 containerd[1443]: time="2025-05-15T00:31:22.955752326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:31:22.956008 containerd[1443]: time="2025-05-15T00:31:22.955774326Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:31:22.956008 containerd[1443]: time="2025-05-15T00:31:22.955904486Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:31:22.956008 containerd[1443]: time="2025-05-15T00:31:22.955974006Z" level=info msg="metadata content store policy set" policy=shared May 15 00:31:22.965373 containerd[1443]: time="2025-05-15T00:31:22.965342886Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:31:22.965864 containerd[1443]: time="2025-05-15T00:31:22.965583486Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:31:22.965864 containerd[1443]: time="2025-05-15T00:31:22.965609846Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 15 00:31:22.965864 containerd[1443]: time="2025-05-15T00:31:22.965628206Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:31:22.965864 containerd[1443]: time="2025-05-15T00:31:22.965643406Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:31:22.965864 containerd[1443]: time="2025-05-15T00:31:22.965814166Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:31:22.966451 containerd[1443]: time="2025-05-15T00:31:22.966428166Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:31:22.966709 containerd[1443]: time="2025-05-15T00:31:22.966685126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966789646Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966810446Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966826966Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966850246Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966863766Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966878886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966895366Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966907846Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966927126Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966940246Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966970406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966986326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.966999726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967486 containerd[1443]: time="2025-05-15T00:31:22.967013126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967027526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967041486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967053366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967068566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967081566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967095886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967108366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967121726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967135406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967151726Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967187846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967204526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:31:22.967776 containerd[1443]: time="2025-05-15T00:31:22.967216366Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968402926Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968439886Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968452246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968464886Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968474286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968487566Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968497966Z" level=info msg="NRI interface is disabled by configuration." May 15 00:31:22.968856 containerd[1443]: time="2025-05-15T00:31:22.968508246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:31:22.969455 containerd[1443]: time="2025-05-15T00:31:22.969387646Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:31:22.972262 containerd[1443]: time="2025-05-15T00:31:22.969879966Z" level=info msg="Connect containerd service" May 15 00:31:22.972262 containerd[1443]: time="2025-05-15T00:31:22.971021046Z" level=info msg="using legacy CRI server" May 15 00:31:22.972262 containerd[1443]: time="2025-05-15T00:31:22.971040726Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:31:22.972262 containerd[1443]: time="2025-05-15T00:31:22.971213886Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:31:22.972262 
containerd[1443]: time="2025-05-15T00:31:22.972075846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:31:22.972262 containerd[1443]: time="2025-05-15T00:31:22.972206006Z" level=info msg="Start subscribing containerd event" May 15 00:31:22.972410 containerd[1443]: time="2025-05-15T00:31:22.972268966Z" level=info msg="Start recovering state" May 15 00:31:22.972410 containerd[1443]: time="2025-05-15T00:31:22.972350846Z" level=info msg="Start event monitor" May 15 00:31:22.972410 containerd[1443]: time="2025-05-15T00:31:22.972363726Z" level=info msg="Start snapshots syncer" May 15 00:31:22.972410 containerd[1443]: time="2025-05-15T00:31:22.972374286Z" level=info msg="Start cni network conf syncer for default" May 15 00:31:22.972410 containerd[1443]: time="2025-05-15T00:31:22.972396326Z" level=info msg="Start streaming server" May 15 00:31:22.973312 containerd[1443]: time="2025-05-15T00:31:22.973287726Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:31:22.973499 containerd[1443]: time="2025-05-15T00:31:22.973481526Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:31:22.973924 containerd[1443]: time="2025-05-15T00:31:22.973908126Z" level=info msg="containerd successfully booted in 0.049006s" May 15 00:31:22.974002 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:31:23.117270 tar[1434]: linux-arm64/README.md May 15 00:31:23.129271 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:31:23.646181 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:31:23.664624 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:31:23.674448 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:31:23.680583 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:31:23.680794 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:31:23.685458 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:31:23.695325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:31:23.698525 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:31:23.700928 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:31:23.702585 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:31:23.905363 systemd-networkd[1362]: eth0: Gained IPv6LL May 15 00:31:23.907948 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:31:23.909705 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:31:23.921466 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:31:23.923824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:23.925741 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:31:23.939371 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:31:23.940472 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 00:31:23.941849 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 15 00:31:23.942785 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:31:24.453720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:31:24.455505 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 00:31:24.457834 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:31:24.462512 systemd[1]: Startup finished in 546ms (kernel) + 4.668s (initrd) + 3.628s (userspace) = 8.844s. May 15 00:31:24.853090 kubelet[1526]: E0515 00:31:24.852972 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:31:24.855311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:31:24.855463 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:31:28.538938 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:31:28.539999 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:56784.service - OpenSSH per-connection server daemon (10.0.0.1:56784). May 15 00:31:28.587014 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 56784 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:28.588901 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:28.600694 systemd-logind[1422]: New session 1 of user core. May 15 00:31:28.601672 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:31:28.621489 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:31:28.630714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:31:28.632949 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:31:28.640125 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:31:28.713307 systemd[1544]: Queued start job for default target default.target. May 15 00:31:28.724171 systemd[1544]: Created slice app.slice - User Application Slice. May 15 00:31:28.724203 systemd[1544]: Reached target paths.target - Paths. May 15 00:31:28.724217 systemd[1544]: Reached target timers.target - Timers. May 15 00:31:28.725647 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:31:28.735744 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:31:28.735808 systemd[1544]: Reached target sockets.target - Sockets. May 15 00:31:28.735821 systemd[1544]: Reached target basic.target - Basic System. May 15 00:31:28.735856 systemd[1544]: Reached target default.target - Main User Target. May 15 00:31:28.735884 systemd[1544]: Startup finished in 90ms. May 15 00:31:28.736121 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:31:28.737430 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:31:28.801068 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:56798.service - OpenSSH per-connection server daemon (10.0.0.1:56798). 
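The kubelet exit above is the normal pre-kubeadm state on a fresh node: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits 1; systemd keeps rescheduling it until kubeadm writes that file. A minimal sketch of what the file eventually contains (kubeadm generates the real one; the two values shown are corroborated further down in this log, in the kubelet's NodeConfig dump and the "Adding static pod path" line):

  # /var/lib/kubelet/config.yaml -- sketch only, normally written by kubeadm
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd                      # matches CgroupDriver:"systemd" below
  staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below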
May 15 00:31:28.846564 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 56798 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:28.848052 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:28.852315 systemd-logind[1422]: New session 2 of user core. May 15 00:31:28.860372 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 00:31:28.911866 sshd[1555]: pam_unix(sshd:session): session closed for user core May 15 00:31:28.926563 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:56798.service: Deactivated successfully. May 15 00:31:28.928198 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:31:28.929624 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. May 15 00:31:28.944504 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:56814.service - OpenSSH per-connection server daemon (10.0.0.1:56814). May 15 00:31:28.945385 systemd-logind[1422]: Removed session 2. May 15 00:31:28.982869 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 56814 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:28.984095 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:28.988363 systemd-logind[1422]: New session 3 of user core. May 15 00:31:29.007417 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:31:29.055097 sshd[1562]: pam_unix(sshd:session): session closed for user core May 15 00:31:29.073648 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:56814.service: Deactivated successfully. May 15 00:31:29.074934 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:31:29.077233 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. May 15 00:31:29.078250 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:56828.service - OpenSSH per-connection server daemon (10.0.0.1:56828). May 15 00:31:29.078990 systemd-logind[1422]: Removed session 3. May 15 00:31:29.115276 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 56828 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:29.116451 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:29.120297 systemd-logind[1422]: New session 4 of user core. May 15 00:31:29.126382 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:31:29.177949 sshd[1569]: pam_unix(sshd:session): session closed for user core May 15 00:31:29.194562 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:56828.service: Deactivated successfully. May 15 00:31:29.196481 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:31:29.197621 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. May 15 00:31:29.198686 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:56842.service - OpenSSH per-connection server daemon (10.0.0.1:56842). May 15 00:31:29.199380 systemd-logind[1422]: Removed session 4. May 15 00:31:29.234947 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 56842 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:29.236148 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:29.239767 systemd-logind[1422]: New session 5 of user core. May 15 00:31:29.252378 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 15 00:31:29.316946 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:31:29.317778 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:31:29.338147 sudo[1579]: pam_unix(sudo:session): session closed for user root May 15 00:31:29.340249 sshd[1576]: pam_unix(sshd:session): session closed for user core May 15 00:31:29.350799 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:56842.service: Deactivated successfully. May 15 00:31:29.352169 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:31:29.353488 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. May 15 00:31:29.354678 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:56852.service - OpenSSH per-connection server daemon (10.0.0.1:56852). May 15 00:31:29.356566 systemd-logind[1422]: Removed session 5. May 15 00:31:29.391836 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 56852 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:29.393124 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:29.396625 systemd-logind[1422]: New session 6 of user core. May 15 00:31:29.413391 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:31:29.464798 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:31:29.465061 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:31:29.468442 sudo[1588]: pam_unix(sudo:session): session closed for user root May 15 00:31:29.472662 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 15 00:31:29.472916 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:31:29.489480 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 15 00:31:29.490676 auditctl[1591]: No rules May 15 00:31:29.491527 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:31:29.492327 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 15 00:31:29.493998 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:31:29.517534 augenrules[1609]: No rules May 15 00:31:29.519327 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:31:29.520644 sudo[1587]: pam_unix(sudo:session): session closed for user root May 15 00:31:29.522040 sshd[1584]: pam_unix(sshd:session): session closed for user core May 15 00:31:29.531868 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:56852.service: Deactivated successfully. May 15 00:31:29.533615 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:31:29.536034 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. May 15 00:31:29.544916 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:56864.service - OpenSSH per-connection server daemon (10.0.0.1:56864). May 15 00:31:29.546072 systemd-logind[1422]: Removed session 6. May 15 00:31:29.577676 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 56864 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:31:29.578770 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:31:29.582585 systemd-logind[1422]: New session 7 of user core. May 15 00:31:29.593413 systemd[1]: Started session-7.scope - Session 7 of User core. 
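The sudo/audit sequence above removes the shipped rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules". The resulting state can be checked by hand with the standard audit userspace tools -- a sketch:

  # list the rules currently loaded in the kernel (expected here: "No rules")
  sudo auditctl -l
  # rebuild the ruleset from /etc/audit/rules.d and load it
  sudo augenrules --load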
May 15 00:31:29.643319 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:31:29.643582 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:31:29.930547 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 00:31:29.930630 (dockerd)[1639]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:31:30.185478 dockerd[1639]: time="2025-05-15T00:31:30.185351086Z" level=info msg="Starting up" May 15 00:31:30.332265 dockerd[1639]: time="2025-05-15T00:31:30.332193966Z" level=info msg="Loading containers: start." May 15 00:31:30.415298 kernel: Initializing XFRM netlink socket May 15 00:31:30.484024 systemd-networkd[1362]: docker0: Link UP May 15 00:31:30.506448 dockerd[1639]: time="2025-05-15T00:31:30.506399446Z" level=info msg="Loading containers: done." May 15 00:31:30.518522 dockerd[1639]: time="2025-05-15T00:31:30.518477886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:31:30.518687 dockerd[1639]: time="2025-05-15T00:31:30.518566966Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 15 00:31:30.518687 dockerd[1639]: time="2025-05-15T00:31:30.518659406Z" level=info msg="Daemon has completed initialization" May 15 00:31:30.547284 dockerd[1639]: time="2025-05-15T00:31:30.547149286Z" level=info msg="API listen on /run/docker.sock" May 15 00:31:30.547479 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:31:31.184420 containerd[1443]: time="2025-05-15T00:31:31.184382846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 00:31:31.315859 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4260966515-merged.mount: Deactivated successfully. May 15 00:31:31.829168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360375589.mount: Deactivated successfully. 
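The PullImage request above, and the ones that follow, go through containerd's CRI plugin rather than the Docker daemon that just started. They can be reproduced or inspected with crictl against the endpoint shown in the config dump earlier -- a sketch, assuming crictl is installed on the node:

  # pull and list images over the CRI socket logged above
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.32.4
  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images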
May 15 00:31:33.109731 containerd[1443]: time="2025-05-15T00:31:33.109048846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:33.109731 containerd[1443]: time="2025-05-15T00:31:33.109609686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 15 00:31:33.110583 containerd[1443]: time="2025-05-15T00:31:33.110545926Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:33.113521 containerd[1443]: time="2025-05-15T00:31:33.113481166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:33.115110 containerd[1443]: time="2025-05-15T00:31:33.114781846Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.93035548s" May 15 00:31:33.115110 containerd[1443]: time="2025-05-15T00:31:33.114834766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 00:31:33.115706 containerd[1443]: time="2025-05-15T00:31:33.115680406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 00:31:34.201851 containerd[1443]: time="2025-05-15T00:31:34.201795926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:34.205282 containerd[1443]: time="2025-05-15T00:31:34.202584846Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 15 00:31:34.206896 containerd[1443]: time="2025-05-15T00:31:34.206850606Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:34.210560 containerd[1443]: time="2025-05-15T00:31:34.210500126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:34.211632 containerd[1443]: time="2025-05-15T00:31:34.211595126Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.09580344s" May 15 00:31:34.211823 containerd[1443]: time="2025-05-15T00:31:34.211726046Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 00:31:34.212361 
containerd[1443]: time="2025-05-15T00:31:34.212337126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 00:31:35.105706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:31:35.116424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:35.218851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:31:35.222544 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:31:35.289441 kubelet[1856]: E0515 00:31:35.289383 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:31:35.292511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:31:35.292654 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:31:35.325467 containerd[1443]: time="2025-05-15T00:31:35.325413766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:35.325977 containerd[1443]: time="2025-05-15T00:31:35.325933766Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 15 00:31:35.326816 containerd[1443]: time="2025-05-15T00:31:35.326764086Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:35.329523 containerd[1443]: time="2025-05-15T00:31:35.329472646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:35.330765 containerd[1443]: time="2025-05-15T00:31:35.330710606Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.11834084s" May 15 00:31:35.330765 containerd[1443]: time="2025-05-15T00:31:35.330743486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 00:31:35.331485 containerd[1443]: time="2025-05-15T00:31:35.331327086Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 00:31:36.208081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548716832.mount: Deactivated successfully. 
May 15 00:31:36.575499 containerd[1443]: time="2025-05-15T00:31:36.575335406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:36.575910 containerd[1443]: time="2025-05-15T00:31:36.575796366Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 15 00:31:36.576669 containerd[1443]: time="2025-05-15T00:31:36.576634846Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:36.580636 containerd[1443]: time="2025-05-15T00:31:36.578931926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:36.581631 containerd[1443]: time="2025-05-15T00:31:36.581595726Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.2502384s" May 15 00:31:36.581735 containerd[1443]: time="2025-05-15T00:31:36.581716646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 00:31:36.582384 containerd[1443]: time="2025-05-15T00:31:36.582356726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 00:31:37.136852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874684163.mount: Deactivated successfully. 
May 15 00:31:37.807602 containerd[1443]: time="2025-05-15T00:31:37.807557846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:37.808557 containerd[1443]: time="2025-05-15T00:31:37.807976366Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 15 00:31:37.809265 containerd[1443]: time="2025-05-15T00:31:37.809134646Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:37.815295 containerd[1443]: time="2025-05-15T00:31:37.814783526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:37.816607 containerd[1443]: time="2025-05-15T00:31:37.816563886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.23416448s" May 15 00:31:37.816607 containerd[1443]: time="2025-05-15T00:31:37.816602766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 00:31:37.817436 containerd[1443]: time="2025-05-15T00:31:37.817223486Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:31:38.332826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242828327.mount: Deactivated successfully. 
May 15 00:31:38.336398 containerd[1443]: time="2025-05-15T00:31:38.336352486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:38.337447 containerd[1443]: time="2025-05-15T00:31:38.337415286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 00:31:38.338224 containerd[1443]: time="2025-05-15T00:31:38.338189006Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:38.340383 containerd[1443]: time="2025-05-15T00:31:38.340335046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:38.341961 containerd[1443]: time="2025-05-15T00:31:38.341926766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 524.6556ms" May 15 00:31:38.341961 containerd[1443]: time="2025-05-15T00:31:38.341962206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 00:31:38.342387 containerd[1443]: time="2025-05-15T00:31:38.342365566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 00:31:38.876032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135325096.mount: Deactivated successfully. May 15 00:31:40.753974 containerd[1443]: time="2025-05-15T00:31:40.753913486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:40.755078 containerd[1443]: time="2025-05-15T00:31:40.754768566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 15 00:31:40.755924 containerd[1443]: time="2025-05-15T00:31:40.755882486Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:40.759773 containerd[1443]: time="2025-05-15T00:31:40.759732326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:31:40.760600 containerd[1443]: time="2025-05-15T00:31:40.760560366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.41816248s" May 15 00:31:40.760645 containerd[1443]: time="2025-05-15T00:31:40.760598406Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 00:31:45.542910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
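With etcd pulled, the whole control-plane image set for Kubernetes v1.32.4 is now local: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. Note the pause skew: this pull set includes pause:3.10, while the containerd CRI config earlier pins sandbox_image to pause:3.8 -- harmless here, but the kind of mismatch kubeadm tends to warn about. The same set can be listed or pre-pulled directly (a sketch, assuming kubeadm is on the node):

  kubeadm config images list --kubernetes-version v1.32.4
  kubeadm config images pull --kubernetes-version v1.32.4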
May 15 00:31:45.552437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:45.564230 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 00:31:45.564338 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 00:31:45.565289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:31:45.579498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:45.602548 systemd[1]: Reloading requested from client PID 2018 ('systemctl') (unit session-7.scope)... May 15 00:31:45.602566 systemd[1]: Reloading... May 15 00:31:45.662272 zram_generator::config[2057]: No configuration found. May 15 00:31:45.779623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:31:45.841341 systemd[1]: Reloading finished in 238 ms. May 15 00:31:45.882077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:45.885447 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:31:45.885629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:31:45.887037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:31:45.985966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:31:45.990290 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:31:46.027667 kubelet[2104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:31:46.027992 kubelet[2104]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 00:31:46.028036 kubelet[2104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
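All three deprecation warnings above point the same way: move the setting into the kubelet config file. Two of the flags have direct KubeletConfiguration fields; --pod-infra-container-image has none, and per the warning it goes away in 1.35 once the image garbage collector takes the sandbox image from the CRI. A sketch of the equivalent config-file lines:

  # additions to /var/lib/kubelet/config.yaml replacing the deprecated flags
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # was --container-runtime-endpoint
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # was --volume-plugin-dir (path from the flexvolume probe below)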
May 15 00:31:46.028206 kubelet[2104]: I0515 00:31:46.028173 2104 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:31:46.843798 kubelet[2104]: I0515 00:31:46.842201 2104 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 00:31:46.843798 kubelet[2104]: I0515 00:31:46.842234 2104 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:31:46.843798 kubelet[2104]: I0515 00:31:46.842640 2104 server.go:954] "Client rotation is on, will bootstrap in background" May 15 00:31:46.864421 kubelet[2104]: E0515 00:31:46.864384 2104 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 15 00:31:46.865687 kubelet[2104]: I0515 00:31:46.865672 2104 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:31:46.875448 kubelet[2104]: E0515 00:31:46.875417 2104 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:31:46.875448 kubelet[2104]: I0515 00:31:46.875448 2104 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:31:46.878001 kubelet[2104]: I0515 00:31:46.877966 2104 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:31:46.878231 kubelet[2104]: I0515 00:31:46.878204 2104 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:31:46.878401 kubelet[2104]: I0515 00:31:46.878233 2104 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:31:46.878489 kubelet[2104]: I0515 00:31:46.878475 2104 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:31:46.878489 kubelet[2104]: I0515 00:31:46.878487 2104 container_manager_linux.go:304] "Creating device plugin manager" May 15 00:31:46.878675 kubelet[2104]: I0515 00:31:46.878662 2104 state_mem.go:36] "Initialized new in-memory state store" May 15 00:31:46.881121 kubelet[2104]: I0515 00:31:46.881093 2104 kubelet.go:446] "Attempting to sync node with API server" May 15 00:31:46.881121 kubelet[2104]: I0515 00:31:46.881117 2104 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:31:46.881189 kubelet[2104]: I0515 00:31:46.881136 2104 kubelet.go:352] "Adding apiserver pod source" May 15 00:31:46.881189 kubelet[2104]: I0515 00:31:46.881146 2104 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:31:46.887287 kubelet[2104]: W0515 00:31:46.886801 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 15 00:31:46.887287 kubelet[2104]: E0515 00:31:46.886867 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 15 00:31:46.887287 kubelet[2104]: I0515 00:31:46.886931 2104 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:31:46.887397 kubelet[2104]: W0515 00:31:46.887324 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 15 00:31:46.887397 kubelet[2104]: E0515 00:31:46.887364 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 15 00:31:46.887579 kubelet[2104]: I0515 00:31:46.887558 2104 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:31:46.887688 kubelet[2104]: W0515 00:31:46.887672 2104 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:31:46.889310 kubelet[2104]: I0515 00:31:46.889289 2104 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 00:31:46.889444 kubelet[2104]: I0515 00:31:46.889434 2104 server.go:1287] "Started kubelet" May 15 00:31:46.890181 kubelet[2104]: I0515 00:31:46.890145 2104 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:31:46.892123 kubelet[2104]: I0515 00:31:46.891812 2104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:31:46.892378 kubelet[2104]: I0515 00:31:46.892358 2104 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:31:46.892798 kubelet[2104]: I0515 00:31:46.892766 2104 server.go:490] "Adding debug handlers to kubelet server" May 15 00:31:46.892919 kubelet[2104]: I0515 00:31:46.892896 2104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:31:46.894327 kubelet[2104]: I0515 00:31:46.894300 2104 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:31:46.895753 kubelet[2104]: E0515 00:31:46.895721 2104 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:31:46.895795 kubelet[2104]: I0515 00:31:46.895763 2104 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 00:31:46.898876 kubelet[2104]: I0515 00:31:46.898696 2104 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:31:46.899101 kubelet[2104]: W0515 00:31:46.899001 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 15 00:31:46.899101 kubelet[2104]: E0515 00:31:46.899049 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 15 00:31:46.899199 kubelet[2104]: E0515 00:31:46.898853 2104 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8bfecb280b16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:31:46.889394966 +0000 UTC m=+0.895762881,LastTimestamp:2025-05-15 00:31:46.889394966 +0000 UTC m=+0.895762881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:31:46.899735 kubelet[2104]: I0515 00:31:46.899252 2104 reconciler.go:26] "Reconciler: start to sync state" May 15 00:31:46.899735 kubelet[2104]: E0515 00:31:46.899477 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" May 15 00:31:46.900197 kubelet[2104]: E0515 00:31:46.900026 2104 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:31:46.900519 kubelet[2104]: I0515 00:31:46.900499 2104 factory.go:221] Registration of the containerd container factory successfully May 15 00:31:46.900519 kubelet[2104]: I0515 00:31:46.900516 2104 factory.go:221] Registration of the systemd container factory successfully May 15 00:31:46.900625 kubelet[2104]: I0515 00:31:46.900605 2104 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:31:46.911335 kubelet[2104]: I0515 00:31:46.911309 2104 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 00:31:46.911335 kubelet[2104]: I0515 00:31:46.911331 2104 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 00:31:46.911428 kubelet[2104]: I0515 00:31:46.911348 2104 state_mem.go:36] "Initialized new in-memory state store" May 15 00:31:46.914115 kubelet[2104]: I0515 00:31:46.914054 2104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:31:46.915125 kubelet[2104]: I0515 00:31:46.915089 2104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:31:46.915125 kubelet[2104]: I0515 00:31:46.915116 2104 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 00:31:46.915206 kubelet[2104]: I0515 00:31:46.915133 2104 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 00:31:46.915206 kubelet[2104]: I0515 00:31:46.915147 2104 kubelet.go:2388] "Starting kubelet main sync loop" May 15 00:31:46.915206 kubelet[2104]: E0515 00:31:46.915195 2104 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:31:46.996229 kubelet[2104]: E0515 00:31:46.996183 2104 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:31:47.015576 kubelet[2104]: E0515 00:31:47.015526 2104 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:31:47.017599 kubelet[2104]: I0515 00:31:47.017566 2104 policy_none.go:49] "None policy: Start" May 15 00:31:47.017599 kubelet[2104]: I0515 00:31:47.017596 2104 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 00:31:47.017599 kubelet[2104]: I0515 00:31:47.017610 2104 state_mem.go:35] "Initializing new in-memory state store" May 15 00:31:47.018063 kubelet[2104]: W0515 00:31:47.017970 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused May 15 00:31:47.018063 kubelet[2104]: E0515 00:31:47.018030 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" May 15 00:31:47.022751 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:31:47.034799 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:31:47.038482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 00:31:47.049032 kubelet[2104]: I0515 00:31:47.048992 2104 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:31:47.049275 kubelet[2104]: I0515 00:31:47.049183 2104 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:31:47.049275 kubelet[2104]: I0515 00:31:47.049198 2104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:31:47.049519 kubelet[2104]: I0515 00:31:47.049491 2104 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:31:47.050751 kubelet[2104]: E0515 00:31:47.050692 2104 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 00:31:47.050751 kubelet[2104]: E0515 00:31:47.050742 2104 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:31:47.101177 kubelet[2104]: E0515 00:31:47.100527 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" May 15 00:31:47.150638 kubelet[2104]: I0515 00:31:47.150608 2104 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:31:47.151076 kubelet[2104]: E0515 00:31:47.151027 2104 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" May 15 00:31:47.224601 systemd[1]: Created slice kubepods-burstable-poda0afe397fe085fcf4becb1d1b0c55c5f.slice - libcontainer container kubepods-burstable-poda0afe397fe085fcf4becb1d1b0c55c5f.slice. May 15 00:31:47.246509 kubelet[2104]: E0515 00:31:47.246469 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:31:47.249036 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 15 00:31:47.260397 kubelet[2104]: E0515 00:31:47.260364 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:31:47.263406 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 15 00:31:47.264813 kubelet[2104]: E0515 00:31:47.264788 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:47.302151 kubelet[2104]: I0515 00:31:47.302123 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:47.302217 kubelet[2104]: I0515 00:31:47.302155 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:47.302217 kubelet[2104]: I0515 00:31:47.302179 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:47.302217 kubelet[2104]: I0515 00:31:47.302193 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:47.302217 kubelet[2104]: I0515 00:31:47.302209 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:47.302335 kubelet[2104]: I0515 00:31:47.302227 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:31:47.302335 kubelet[2104]: I0515 00:31:47.302256 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:47.302335 kubelet[2104]: I0515 00:31:47.302284 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:47.302335 kubelet[2104]: I0515 00:31:47.302299 2104 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:47.353738 kubelet[2104]: I0515 00:31:47.353182 2104 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:31:47.353738 kubelet[2104]: E0515 00:31:47.353572 2104 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
May 15 00:31:47.501413 kubelet[2104]: E0515 00:31:47.501372 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms"
May 15 00:31:47.547864 kubelet[2104]: E0515 00:31:47.547813 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:47.548464 containerd[1443]: time="2025-05-15T00:31:47.548418366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0afe397fe085fcf4becb1d1b0c55c5f,Namespace:kube-system,Attempt:0,}"
May 15 00:31:47.561695 kubelet[2104]: E0515 00:31:47.561656 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:47.562084 containerd[1443]: time="2025-05-15T00:31:47.562038926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 00:31:47.565500 kubelet[2104]: E0515 00:31:47.565474 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:47.565823 containerd[1443]: time="2025-05-15T00:31:47.565793806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 00:31:47.740676 kubelet[2104]: W0515 00:31:47.740564 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
May 15 00:31:47.740676 kubelet[2104]: E0515 00:31:47.740638 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
May 15 00:31:47.758309 kubelet[2104]: I0515 00:31:47.758266 2104 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:31:47.758716 kubelet[2104]: E0515 00:31:47.758689 2104 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
May 15 00:31:48.081983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646309864.mount: Deactivated successfully.
May 15 00:31:48.087706 containerd[1443]: time="2025-05-15T00:31:48.087659206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:31:48.089457 containerd[1443]: time="2025-05-15T00:31:48.089422606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:31:48.091154 containerd[1443]: time="2025-05-15T00:31:48.091123846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 15 00:31:48.091793 containerd[1443]: time="2025-05-15T00:31:48.091770246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:31:48.093454 containerd[1443]: time="2025-05-15T00:31:48.093407126Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:31:48.094263 containerd[1443]: time="2025-05-15T00:31:48.094207286Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:31:48.095294 containerd[1443]: time="2025-05-15T00:31:48.095256606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:31:48.096846 containerd[1443]: time="2025-05-15T00:31:48.096810166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:31:48.099650 containerd[1443]: time="2025-05-15T00:31:48.099559646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.05932ms"
May 15 00:31:48.101258 containerd[1443]: time="2025-05-15T00:31:48.101187206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.331ms"
May 15 00:31:48.103471 containerd[1443]: time="2025-05-15T00:31:48.103434126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 541.31916ms"
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.221806166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.221855406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.221866606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.221941846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.221614726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.222005046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:31:48.222026 containerd[1443]: time="2025-05-15T00:31:48.222018966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.222438 containerd[1443]: time="2025-05-15T00:31:48.222146686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.223887 containerd[1443]: time="2025-05-15T00:31:48.223806246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:31:48.223887 containerd[1443]: time="2025-05-15T00:31:48.223863686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:31:48.223887 containerd[1443]: time="2025-05-15T00:31:48.223879086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.223998 containerd[1443]: time="2025-05-15T00:31:48.223952726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:48.241393 systemd[1]: Started cri-containerd-1da768dbd4cc8f200e064081710a92064e296c6aae0e736c409b4cc87f1f4afe.scope - libcontainer container 1da768dbd4cc8f200e064081710a92064e296c6aae0e736c409b4cc87f1f4afe.
May 15 00:31:48.242496 systemd[1]: Started cri-containerd-3436994ad96c49137ebb1d694f1954f422a2a679b98f9b4b3af143522f717165.scope - libcontainer container 3436994ad96c49137ebb1d694f1954f422a2a679b98f9b4b3af143522f717165.
May 15 00:31:48.245132 systemd[1]: Started cri-containerd-8067e30d6904db55c6ec2a6c782397f246beb1e3f206afc0a4a38c2207a6da48.scope - libcontainer container 8067e30d6904db55c6ec2a6c782397f246beb1e3f206afc0a4a38c2207a6da48.
May 15 00:31:48.265548 kubelet[2104]: W0515 00:31:48.265470 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
May 15 00:31:48.265548 kubelet[2104]: E0515 00:31:48.265538 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
May 15 00:31:48.270416 kubelet[2104]: W0515 00:31:48.270360 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
May 15 00:31:48.270504 kubelet[2104]: E0515 00:31:48.270416 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
May 15 00:31:48.276066 containerd[1443]: time="2025-05-15T00:31:48.276014566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0afe397fe085fcf4becb1d1b0c55c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1da768dbd4cc8f200e064081710a92064e296c6aae0e736c409b4cc87f1f4afe\""
May 15 00:31:48.277362 kubelet[2104]: E0515 00:31:48.277341 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:48.279371 containerd[1443]: time="2025-05-15T00:31:48.279334566Z" level=info msg="CreateContainer within sandbox \"1da768dbd4cc8f200e064081710a92064e296c6aae0e736c409b4cc87f1f4afe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 00:31:48.283063 containerd[1443]: time="2025-05-15T00:31:48.283019926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"3436994ad96c49137ebb1d694f1954f422a2a679b98f9b4b3af143522f717165\""
May 15 00:31:48.283225 containerd[1443]: time="2025-05-15T00:31:48.283133486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"8067e30d6904db55c6ec2a6c782397f246beb1e3f206afc0a4a38c2207a6da48\""
May 15 00:31:48.283804 kubelet[2104]: E0515 00:31:48.283782 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:48.284190 kubelet[2104]: E0515 00:31:48.284118 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:48.285726 containerd[1443]: time="2025-05-15T00:31:48.285694926Z" level=info msg="CreateContainer within sandbox \"3436994ad96c49137ebb1d694f1954f422a2a679b98f9b4b3af143522f717165\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 00:31:48.286578 containerd[1443]: time="2025-05-15T00:31:48.286545726Z" level=info msg="CreateContainer within sandbox \"8067e30d6904db55c6ec2a6c782397f246beb1e3f206afc0a4a38c2207a6da48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 00:31:48.295289 containerd[1443]: time="2025-05-15T00:31:48.295229646Z" level=info msg="CreateContainer within sandbox \"1da768dbd4cc8f200e064081710a92064e296c6aae0e736c409b4cc87f1f4afe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"442a20c8daed1bfb9c6529c9d6c4b7f87860319418499f89656b8f3a35d1cfcd\""
May 15 00:31:48.296094 containerd[1443]: time="2025-05-15T00:31:48.296010806Z" level=info msg="StartContainer for \"442a20c8daed1bfb9c6529c9d6c4b7f87860319418499f89656b8f3a35d1cfcd\""
May 15 00:31:48.302336 kubelet[2104]: E0515 00:31:48.302288 2104 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s"
May 15 00:31:48.302549 containerd[1443]: time="2025-05-15T00:31:48.302517886Z" level=info msg="CreateContainer within sandbox \"8067e30d6904db55c6ec2a6c782397f246beb1e3f206afc0a4a38c2207a6da48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a0d174df85aad7f377be5566c37ea4ddc95bc77d52ffe75d9f4f3ef00ef84f35\""
May 15 00:31:48.303026 containerd[1443]: time="2025-05-15T00:31:48.303001606Z" level=info msg="StartContainer for \"a0d174df85aad7f377be5566c37ea4ddc95bc77d52ffe75d9f4f3ef00ef84f35\""
May 15 00:31:48.305305 containerd[1443]: time="2025-05-15T00:31:48.305139206Z" level=info msg="CreateContainer within sandbox \"3436994ad96c49137ebb1d694f1954f422a2a679b98f9b4b3af143522f717165\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f739b0dbad71dec2df155fc90653f4bfbc2443dfb832af84f2a4c91fbe43453\""
May 15 00:31:48.305596 containerd[1443]: time="2025-05-15T00:31:48.305573206Z" level=info msg="StartContainer for \"2f739b0dbad71dec2df155fc90653f4bfbc2443dfb832af84f2a4c91fbe43453\""
May 15 00:31:48.327423 systemd[1]: Started cri-containerd-442a20c8daed1bfb9c6529c9d6c4b7f87860319418499f89656b8f3a35d1cfcd.scope - libcontainer container 442a20c8daed1bfb9c6529c9d6c4b7f87860319418499f89656b8f3a35d1cfcd.
May 15 00:31:48.331846 systemd[1]: Started cri-containerd-2f739b0dbad71dec2df155fc90653f4bfbc2443dfb832af84f2a4c91fbe43453.scope - libcontainer container 2f739b0dbad71dec2df155fc90653f4bfbc2443dfb832af84f2a4c91fbe43453.
May 15 00:31:48.333438 systemd[1]: Started cri-containerd-a0d174df85aad7f377be5566c37ea4ddc95bc77d52ffe75d9f4f3ef00ef84f35.scope - libcontainer container a0d174df85aad7f377be5566c37ea4ddc95bc77d52ffe75d9f4f3ef00ef84f35.
May 15 00:31:48.385342 containerd[1443]: time="2025-05-15T00:31:48.384254926Z" level=info msg="StartContainer for \"442a20c8daed1bfb9c6529c9d6c4b7f87860319418499f89656b8f3a35d1cfcd\" returns successfully"
May 15 00:31:48.385342 containerd[1443]: time="2025-05-15T00:31:48.384391046Z" level=info msg="StartContainer for \"2f739b0dbad71dec2df155fc90653f4bfbc2443dfb832af84f2a4c91fbe43453\" returns successfully"
May 15 00:31:48.385342 containerd[1443]: time="2025-05-15T00:31:48.384627166Z" level=info msg="StartContainer for \"a0d174df85aad7f377be5566c37ea4ddc95bc77d52ffe75d9f4f3ef00ef84f35\" returns successfully"
May 15 00:31:48.385510 kubelet[2104]: W0515 00:31:48.384898 2104 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.130:6443: connect: connection refused
May 15 00:31:48.385510 kubelet[2104]: E0515 00:31:48.385067 2104 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError"
May 15 00:31:48.566346 kubelet[2104]: I0515 00:31:48.566303 2104 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:31:48.566709 kubelet[2104]: E0515 00:31:48.566671 2104 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost"
May 15 00:31:48.928500 kubelet[2104]: E0515 00:31:48.928466 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:48.928603 kubelet[2104]: E0515 00:31:48.928591 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:48.931004 kubelet[2104]: E0515 00:31:48.930982 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:48.931124 kubelet[2104]: E0515 00:31:48.931100 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:48.932765 kubelet[2104]: E0515 00:31:48.932746 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:48.932867 kubelet[2104]: E0515 00:31:48.932850 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:49.935194 kubelet[2104]: E0515 00:31:49.935160 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:49.935503 kubelet[2104]: E0515 00:31:49.935300 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:49.935550 kubelet[2104]: E0515 00:31:49.935515 2104 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:31:49.936248 kubelet[2104]: E0515 00:31:49.935624 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:49.948594 kubelet[2104]: E0515 00:31:49.948535 2104 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 15 00:31:50.067538 kubelet[2104]: E0515 00:31:50.067398 2104 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f8bfecb280b16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:31:46.889394966 +0000 UTC m=+0.895762881,LastTimestamp:2025-05-15 00:31:46.889394966 +0000 UTC m=+0.895762881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:31:50.168199 kubelet[2104]: I0515 00:31:50.168158 2104 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:31:50.172209 kubelet[2104]: I0515 00:31:50.172178 2104 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 00:31:50.172209 kubelet[2104]: E0515 00:31:50.172206 2104 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 15 00:31:50.175192 kubelet[2104]: E0515 00:31:50.174780 2104 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:31:50.199409 kubelet[2104]: I0515 00:31:50.199330 2104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:31:50.206672 kubelet[2104]: E0515 00:31:50.206445 2104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 15 00:31:50.206672 kubelet[2104]: I0515 00:31:50.206470 2104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:50.207955 kubelet[2104]: E0515 00:31:50.207928 2104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:50.207955 kubelet[2104]: I0515 00:31:50.207950 2104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:50.209599 kubelet[2104]: E0515 00:31:50.209401 2104 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:50.887735 kubelet[2104]: I0515 00:31:50.887499 2104 apiserver.go:52] "Watching apiserver"
May 15 00:31:50.899199 kubelet[2104]: I0515 00:31:50.899159 2104 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:31:50.935184 kubelet[2104]: I0515 00:31:50.935163 2104 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:50.941980 kubelet[2104]: E0515 00:31:50.941945 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:51.937030 kubelet[2104]: E0515 00:31:51.936608 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:51.959841 systemd[1]: Reloading requested from client PID 2382 ('systemctl') (unit session-7.scope)...
May 15 00:31:51.959854 systemd[1]: Reloading...
May 15 00:31:52.023271 zram_generator::config[2424]: No configuration found.
May 15 00:31:52.101744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:31:52.176530 systemd[1]: Reloading finished in 216 ms.
May 15 00:31:52.210881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:31:52.225133 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:31:52.225387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:31:52.225441 systemd[1]: kubelet.service: Consumed 1.278s CPU time, 123.2M memory peak, 0B memory swap peak.
May 15 00:31:52.234772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:31:52.333454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:31:52.338812 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:31:52.376264 kubelet[2463]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:31:52.376264 kubelet[2463]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:31:52.376264 kubelet[2463]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:31:52.376264 kubelet[2463]: I0515 00:31:52.376184 2463 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:31:52.382130 kubelet[2463]: I0515 00:31:52.382094 2463 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:31:52.382130 kubelet[2463]: I0515 00:31:52.382121 2463 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:31:52.382391 kubelet[2463]: I0515 00:31:52.382365 2463 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:31:52.383555 kubelet[2463]: I0515 00:31:52.383532 2463 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 00:31:52.385829 kubelet[2463]: I0515 00:31:52.385800 2463 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:31:52.389853 kubelet[2463]: E0515 00:31:52.389825 2463 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:31:52.389853 kubelet[2463]: I0515 00:31:52.389853 2463 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:31:52.392312 kubelet[2463]: I0515 00:31:52.392282 2463 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:31:52.392512 kubelet[2463]: I0515 00:31:52.392478 2463 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:31:52.392650 kubelet[2463]: I0515 00:31:52.392504 2463 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:31:52.392726 kubelet[2463]: I0515 00:31:52.392657 2463 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:31:52.392726 kubelet[2463]: I0515 00:31:52.392666 2463 container_manager_linux.go:304] "Creating device plugin manager"
May 15 00:31:52.392726 kubelet[2463]: I0515 00:31:52.392707 2463 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:31:52.392866 kubelet[2463]: I0515 00:31:52.392825 2463 kubelet.go:446] "Attempting to sync node with API server"
May 15 00:31:52.392866 kubelet[2463]: I0515 00:31:52.392839 2463 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:31:52.392866 kubelet[2463]: I0515 00:31:52.392855 2463 kubelet.go:352] "Adding apiserver pod source"
May 15 00:31:52.392866 kubelet[2463]: I0515 00:31:52.392863 2463 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:31:52.394068 kubelet[2463]: I0515 00:31:52.393435 2463 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 15 00:31:52.394068 kubelet[2463]: I0515 00:31:52.393863 2463 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:31:52.394290 kubelet[2463]: I0515 00:31:52.394260 2463 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 00:31:52.394345 kubelet[2463]: I0515 00:31:52.394295 2463 server.go:1287] "Started kubelet"
May 15 00:31:52.395484 kubelet[2463]: I0515 00:31:52.395048 2463 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:31:52.395940 kubelet[2463]: I0515 00:31:52.395909 2463 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:31:52.396115 kubelet[2463]: I0515 00:31:52.396100 2463 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:31:52.396387 kubelet[2463]: I0515 00:31:52.395565 2463 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:31:52.397795 kubelet[2463]: I0515 00:31:52.397630 2463 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:31:52.398482 kubelet[2463]: I0515 00:31:52.398457 2463 server.go:490] "Adding debug handlers to kubelet server"
May 15 00:31:52.399212 kubelet[2463]: I0515 00:31:52.399172 2463 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 00:31:52.399387 kubelet[2463]: I0515 00:31:52.399315 2463 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:31:52.399426 kubelet[2463]: I0515 00:31:52.399418 2463 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:31:52.400476 kubelet[2463]: I0515 00:31:52.400149 2463 factory.go:221] Registration of the systemd container factory successfully
May 15 00:31:52.400713 kubelet[2463]: E0515 00:31:52.400678 2463 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:31:52.402317 kubelet[2463]: I0515 00:31:52.401061 2463 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:31:52.402968 kubelet[2463]: E0515 00:31:52.402943 2463 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:31:52.403757 kubelet[2463]: I0515 00:31:52.403730 2463 factory.go:221] Registration of the containerd container factory successfully
May 15 00:31:52.422133 kubelet[2463]: I0515 00:31:52.421691 2463 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:31:52.426134 kubelet[2463]: I0515 00:31:52.426105 2463 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:31:52.426134 kubelet[2463]: I0515 00:31:52.426132 2463 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 00:31:52.426230 kubelet[2463]: I0515 00:31:52.426150 2463 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 00:31:52.426230 kubelet[2463]: I0515 00:31:52.426157 2463 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:31:52.426230 kubelet[2463]: E0515 00:31:52.426195 2463 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:31:52.448929 kubelet[2463]: I0515 00:31:52.448905 2463 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 00:31:52.448929 kubelet[2463]: I0515 00:31:52.448925 2463 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 00:31:52.449023 kubelet[2463]: I0515 00:31:52.448944 2463 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:31:52.449104 kubelet[2463]: I0515 00:31:52.449086 2463 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 00:31:52.449137 kubelet[2463]: I0515 00:31:52.449102 2463 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 00:31:52.449137 kubelet[2463]: I0515 00:31:52.449120 2463 policy_none.go:49] "None policy: Start"
May 15 00:31:52.449137 kubelet[2463]: I0515 00:31:52.449127 2463 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:31:52.449137 kubelet[2463]: I0515 00:31:52.449136 2463 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:31:52.449233 kubelet[2463]: I0515 00:31:52.449223 2463 state_mem.go:75] "Updated machine memory state"
May 15 00:31:52.453325 kubelet[2463]: I0515 00:31:52.453204 2463 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:31:52.453440 kubelet[2463]: I0515 00:31:52.453367 2463 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:31:52.453440 kubelet[2463]: I0515 00:31:52.453379 2463 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:31:52.454253 kubelet[2463]: I0515 00:31:52.453667 2463 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:31:52.454394 kubelet[2463]: E0515 00:31:52.454372 2463 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:31:52.527752 kubelet[2463]: I0515 00:31:52.526798 2463 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.527752 kubelet[2463]: I0515 00:31:52.526829 2463 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:52.527752 kubelet[2463]: I0515 00:31:52.526872 2463 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:31:52.532959 kubelet[2463]: E0515 00:31:52.532928 2463 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:52.557354 kubelet[2463]: I0515 00:31:52.557323 2463 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:31:52.562801 kubelet[2463]: I0515 00:31:52.562765 2463 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 15 00:31:52.562880 kubelet[2463]: I0515 00:31:52.562861 2463 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 00:31:52.600363 kubelet[2463]: I0515 00:31:52.600333 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.600535 kubelet[2463]: I0515 00:31:52.600367 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.600535 kubelet[2463]: I0515 00:31:52.600391 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.600535 kubelet[2463]: I0515 00:31:52.600407 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.600535 kubelet[2463]: I0515 00:31:52.600461 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:52.600535 kubelet[2463]: I0515 00:31:52.600478 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:52.600701 kubelet[2463]: I0515 00:31:52.600499 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:31:52.600701 kubelet[2463]: I0515 00:31:52.600517 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:31:52.600701 kubelet[2463]: I0515 00:31:52.600532 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0afe397fe085fcf4becb1d1b0c55c5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0afe397fe085fcf4becb1d1b0c55c5f\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:31:52.832594 kubelet[2463]: E0515 00:31:52.832481 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:52.832594 kubelet[2463]: E0515 00:31:52.832495 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:52.833778 kubelet[2463]: E0515 00:31:52.833751 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:53.393701 kubelet[2463]: I0515 00:31:53.393653 2463 apiserver.go:52] "Watching apiserver"
May 15 00:31:53.400481 kubelet[2463]: I0515 00:31:53.400437 2463 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:31:53.438274 kubelet[2463]: E0515 00:31:53.438227 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:53.438274 kubelet[2463]: I0515 00:31:53.438260 2463 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:53.438422 kubelet[2463]: I0515 00:31:53.438309 2463 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:31:53.465548 kubelet[2463]: E0515 00:31:53.465497 2463 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:31:53.465662 kubelet[2463]: E0515 00:31:53.465633 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:53.465980 kubelet[2463]: E0515 00:31:53.465751 2463 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 00:31:53.465980 kubelet[2463]: E0515 00:31:53.465864 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:53.493402 kubelet[2463]: I0515 00:31:53.490725 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.490710606 podStartE2EDuration="1.490710606s" podCreationTimestamp="2025-05-15 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:31:53.490403206 +0000 UTC m=+1.148085961" watchObservedRunningTime="2025-05-15 00:31:53.490710606 +0000 UTC m=+1.148393281"
May 15 00:31:53.513686 kubelet[2463]: I0515 00:31:53.513618 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.5135984860000002 podStartE2EDuration="3.513598486s" podCreationTimestamp="2025-05-15 00:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:31:53.501105486 +0000 UTC m=+1.158788161" watchObservedRunningTime="2025-05-15 00:31:53.513598486 +0000 UTC m=+1.171281161"
May 15 00:31:53.532059 kubelet[2463]: I0515 00:31:53.531992 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.531972806 podStartE2EDuration="1.531972806s" podCreationTimestamp="2025-05-15 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:31:53.514467326 +0000 UTC m=+1.172150001" watchObservedRunningTime="2025-05-15 00:31:53.531972806 +0000 UTC m=+1.189655481"
May 15 00:31:54.439797 kubelet[2463]: E0515 00:31:54.439753 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:54.440452 kubelet[2463]: E0515 00:31:54.439827 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:55.440923 kubelet[2463]: E0515 00:31:55.440887 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:55.441501 kubelet[2463]: E0515 00:31:55.441458 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:56.761851 kubelet[2463]: I0515 00:31:56.761810 2463 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 00:31:56.762462 containerd[1443]: time="2025-05-15T00:31:56.762367121Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 00:31:56.762715 kubelet[2463]: I0515 00:31:56.762551 2463 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 00:31:57.063441 sudo[1620]: pam_unix(sudo:session): session closed for user root
May 15 00:31:57.066367 sshd[1617]: pam_unix(sshd:session): session closed for user core
May 15 00:31:57.070198 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
May 15 00:31:57.070426 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:56864.service: Deactivated successfully.
May 15 00:31:57.072968 systemd[1]: session-7.scope: Deactivated successfully.
May 15 00:31:57.073178 systemd[1]: session-7.scope: Consumed 6.788s CPU time, 151.1M memory peak, 0B memory swap peak.
May 15 00:31:57.074100 systemd-logind[1422]: Removed session 7.
May 15 00:31:57.457962 systemd[1]: Created slice kubepods-besteffort-pod942579ba_f1b9_44d7_b5a6_ea7952b4f144.slice - libcontainer container kubepods-besteffort-pod942579ba_f1b9_44d7_b5a6_ea7952b4f144.slice.
May 15 00:31:57.533227 kubelet[2463]: I0515 00:31:57.533082 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/942579ba-f1b9-44d7-b5a6-ea7952b4f144-lib-modules\") pod \"kube-proxy-xqnnd\" (UID: \"942579ba-f1b9-44d7-b5a6-ea7952b4f144\") " pod="kube-system/kube-proxy-xqnnd"
May 15 00:31:57.533391 kubelet[2463]: I0515 00:31:57.533260 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/942579ba-f1b9-44d7-b5a6-ea7952b4f144-kube-proxy\") pod \"kube-proxy-xqnnd\" (UID: \"942579ba-f1b9-44d7-b5a6-ea7952b4f144\") " pod="kube-system/kube-proxy-xqnnd"
May 15 00:31:57.533391 kubelet[2463]: I0515 00:31:57.533291 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/942579ba-f1b9-44d7-b5a6-ea7952b4f144-xtables-lock\") pod \"kube-proxy-xqnnd\" (UID: \"942579ba-f1b9-44d7-b5a6-ea7952b4f144\") " pod="kube-system/kube-proxy-xqnnd"
May 15 00:31:57.533391 kubelet[2463]: I0515 00:31:57.533308 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxzj\" (UniqueName: \"kubernetes.io/projected/942579ba-f1b9-44d7-b5a6-ea7952b4f144-kube-api-access-jnxzj\") pod \"kube-proxy-xqnnd\" (UID: \"942579ba-f1b9-44d7-b5a6-ea7952b4f144\") " pod="kube-system/kube-proxy-xqnnd"
May 15 00:31:57.647080 kubelet[2463]: E0515 00:31:57.647040 2463 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 00:31:57.647080 kubelet[2463]: E0515 00:31:57.647073 2463 projected.go:194] Error preparing data for projected volume kube-api-access-jnxzj for pod kube-system/kube-proxy-xqnnd: configmap "kube-root-ca.crt" not found
May 15 00:31:57.647253 kubelet[2463]: E0515 00:31:57.647135 2463 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/942579ba-f1b9-44d7-b5a6-ea7952b4f144-kube-api-access-jnxzj podName:942579ba-f1b9-44d7-b5a6-ea7952b4f144 nodeName:}" failed. No retries permitted until 2025-05-15 00:31:58.147114639 +0000 UTC m=+5.804797274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jnxzj" (UniqueName: "kubernetes.io/projected/942579ba-f1b9-44d7-b5a6-ea7952b4f144-kube-api-access-jnxzj") pod "kube-proxy-xqnnd" (UID: "942579ba-f1b9-44d7-b5a6-ea7952b4f144") : configmap "kube-root-ca.crt" not found
May 15 00:31:57.815297 systemd[1]: Created slice kubepods-besteffort-pod347056a3_9a7a_4a8d_b249_147fb704cb8a.slice - libcontainer container kubepods-besteffort-pod347056a3_9a7a_4a8d_b249_147fb704cb8a.slice.
May 15 00:31:57.836493 kubelet[2463]: I0515 00:31:57.836462 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5dxs\" (UniqueName: \"kubernetes.io/projected/347056a3-9a7a-4a8d-b249-147fb704cb8a-kube-api-access-r5dxs\") pod \"tigera-operator-789496d6f5-vgntm\" (UID: \"347056a3-9a7a-4a8d-b249-147fb704cb8a\") " pod="tigera-operator/tigera-operator-789496d6f5-vgntm"
May 15 00:31:57.836840 kubelet[2463]: I0515 00:31:57.836501 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/347056a3-9a7a-4a8d-b249-147fb704cb8a-var-lib-calico\") pod \"tigera-operator-789496d6f5-vgntm\" (UID: \"347056a3-9a7a-4a8d-b249-147fb704cb8a\") " pod="tigera-operator/tigera-operator-789496d6f5-vgntm"
May 15 00:31:58.119627 containerd[1443]: time="2025-05-15T00:31:58.119503211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-vgntm,Uid:347056a3-9a7a-4a8d-b249-147fb704cb8a,Namespace:tigera-operator,Attempt:0,}"
May 15 00:31:58.143630 containerd[1443]: time="2025-05-15T00:31:58.143194895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:31:58.143630 containerd[1443]: time="2025-05-15T00:31:58.143296892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:31:58.143630 containerd[1443]: time="2025-05-15T00:31:58.143308732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:58.143630 containerd[1443]: time="2025-05-15T00:31:58.143401530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:58.164439 systemd[1]: Started cri-containerd-a0c7f3408d0b8695767187b0af384b2612da66fa0a0b264884f5d42d76c9441d.scope - libcontainer container a0c7f3408d0b8695767187b0af384b2612da66fa0a0b264884f5d42d76c9441d.
May 15 00:31:58.190984 containerd[1443]: time="2025-05-15T00:31:58.190893335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-vgntm,Uid:347056a3-9a7a-4a8d-b249-147fb704cb8a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a0c7f3408d0b8695767187b0af384b2612da66fa0a0b264884f5d42d76c9441d\""
May 15 00:31:58.192626 containerd[1443]: time="2025-05-15T00:31:58.192581047Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 15 00:31:58.372936 kubelet[2463]: E0515 00:31:58.371062 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:58.373030 containerd[1443]: time="2025-05-15T00:31:58.372172604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqnnd,Uid:942579ba-f1b9-44d7-b5a6-ea7952b4f144,Namespace:kube-system,Attempt:0,}"
May 15 00:31:58.403659 containerd[1443]: time="2025-05-15T00:31:58.401393291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:31:58.403659 containerd[1443]: time="2025-05-15T00:31:58.401452169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:31:58.403659 containerd[1443]: time="2025-05-15T00:31:58.401470689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:58.403659 containerd[1443]: time="2025-05-15T00:31:58.401564566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:31:58.427410 systemd[1]: Started cri-containerd-28c9d93484a8762b673b2593a23e0be950b5372247f520511382c5abf894e6e7.scope - libcontainer container 28c9d93484a8762b673b2593a23e0be950b5372247f520511382c5abf894e6e7.
May 15 00:31:58.453384 containerd[1443]: time="2025-05-15T00:31:58.453345249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqnnd,Uid:942579ba-f1b9-44d7-b5a6-ea7952b4f144,Namespace:kube-system,Attempt:0,} returns sandbox id \"28c9d93484a8762b673b2593a23e0be950b5372247f520511382c5abf894e6e7\""
May 15 00:31:58.454662 kubelet[2463]: E0515 00:31:58.454640 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:58.457225 containerd[1443]: time="2025-05-15T00:31:58.457183300Z" level=info msg="CreateContainer within sandbox \"28c9d93484a8762b673b2593a23e0be950b5372247f520511382c5abf894e6e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 00:31:58.469393 containerd[1443]: time="2025-05-15T00:31:58.469276395Z" level=info msg="CreateContainer within sandbox \"28c9d93484a8762b673b2593a23e0be950b5372247f520511382c5abf894e6e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d41c81d28313cca17c14dfef93ae62bb3f15f50b137ed891a4eb75def5cfba5\""
May 15 00:31:58.471173 containerd[1443]: time="2025-05-15T00:31:58.471145701Z" level=info msg="StartContainer for \"2d41c81d28313cca17c14dfef93ae62bb3f15f50b137ed891a4eb75def5cfba5\""
May 15 00:31:58.495418 systemd[1]: Started cri-containerd-2d41c81d28313cca17c14dfef93ae62bb3f15f50b137ed891a4eb75def5cfba5.scope - libcontainer container 2d41c81d28313cca17c14dfef93ae62bb3f15f50b137ed891a4eb75def5cfba5.
May 15 00:31:58.530224 containerd[1443]: time="2025-05-15T00:31:58.530110299Z" level=info msg="StartContainer for \"2d41c81d28313cca17c14dfef93ae62bb3f15f50b137ed891a4eb75def5cfba5\" returns successfully"
May 15 00:31:59.451360 kubelet[2463]: E0515 00:31:59.451321 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:31:59.463103 kubelet[2463]: I0515 00:31:59.463035 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xqnnd" podStartSLOduration=2.463017953 podStartE2EDuration="2.463017953s" podCreationTimestamp="2025-05-15 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:31:59.46161219 +0000 UTC m=+7.119294865" watchObservedRunningTime="2025-05-15 00:31:59.463017953 +0000 UTC m=+7.120700668"
May 15 00:31:59.819114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058800550.mount: Deactivated successfully.
May 15 00:32:00.187710 containerd[1443]: time="2025-05-15T00:32:00.187651327Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:00.188310 containerd[1443]: time="2025-05-15T00:32:00.188262952Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 15 00:32:00.188936 containerd[1443]: time="2025-05-15T00:32:00.188895336Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:00.191753 containerd[1443]: time="2025-05-15T00:32:00.191716105Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:00.192301 containerd[1443]: time="2025-05-15T00:32:00.192269332Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.999651286s"
May 15 00:32:00.192301 containerd[1443]: time="2025-05-15T00:32:00.192297651Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 15 00:32:00.195875 containerd[1443]: time="2025-05-15T00:32:00.195831042Z" level=info msg="CreateContainer within sandbox \"a0c7f3408d0b8695767187b0af384b2612da66fa0a0b264884f5d42d76c9441d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 15 00:32:00.204848 containerd[1443]: time="2025-05-15T00:32:00.204802177Z" level=info msg="CreateContainer within sandbox \"a0c7f3408d0b8695767187b0af384b2612da66fa0a0b264884f5d42d76c9441d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab1968cc92169605da7f91e248a67df3e7c5e25d8af40f04e513c130c4876edf\""
May 15 00:32:00.205266 containerd[1443]: time="2025-05-15T00:32:00.205224087Z" level=info msg="StartContainer for \"ab1968cc92169605da7f91e248a67df3e7c5e25d8af40f04e513c130c4876edf\""
May 15 00:32:00.233421 systemd[1]: Started cri-containerd-ab1968cc92169605da7f91e248a67df3e7c5e25d8af40f04e513c130c4876edf.scope - libcontainer container ab1968cc92169605da7f91e248a67df3e7c5e25d8af40f04e513c130c4876edf.
May 15 00:32:00.303200 containerd[1443]: time="2025-05-15T00:32:00.303139832Z" level=info msg="StartContainer for \"ab1968cc92169605da7f91e248a67df3e7c5e25d8af40f04e513c130c4876edf\" returns successfully"
May 15 00:32:00.463139 kubelet[2463]: I0515 00:32:00.462331 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-vgntm" podStartSLOduration=1.459828629 podStartE2EDuration="3.462315402s" podCreationTimestamp="2025-05-15 00:31:57 +0000 UTC" firstStartedPulling="2025-05-15 00:31:58.192164819 +0000 UTC m=+5.849847494" lastFinishedPulling="2025-05-15 00:32:00.194651592 +0000 UTC m=+7.852334267" observedRunningTime="2025-05-15 00:32:00.462108487 +0000 UTC m=+8.119791242" watchObservedRunningTime="2025-05-15 00:32:00.462315402 +0000 UTC m=+8.119998077"
May 15 00:32:02.195649 kubelet[2463]: E0515 00:32:02.195610 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:02.456629 kubelet[2463]: E0515 00:32:02.456406 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:03.547508 kubelet[2463]: E0515 00:32:03.547455 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:03.666447 kubelet[2463]: E0515 00:32:03.666396 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:04.464906 kubelet[2463]: E0515 00:32:04.464849 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:04.757155 systemd[1]: Created slice kubepods-besteffort-podb7a7570f_7f59_4b63_bec8_db83b1a3dd36.slice - libcontainer container kubepods-besteffort-podb7a7570f_7f59_4b63_bec8_db83b1a3dd36.slice.
May 15 00:32:04.792839 kubelet[2463]: I0515 00:32:04.792735 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7a7570f-7f59-4b63-bec8-db83b1a3dd36-tigera-ca-bundle\") pod \"calico-typha-84c64d8bb4-rjqrp\" (UID: \"b7a7570f-7f59-4b63-bec8-db83b1a3dd36\") " pod="calico-system/calico-typha-84c64d8bb4-rjqrp" May 15 00:32:04.793142 kubelet[2463]: I0515 00:32:04.792865 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b7a7570f-7f59-4b63-bec8-db83b1a3dd36-typha-certs\") pod \"calico-typha-84c64d8bb4-rjqrp\" (UID: \"b7a7570f-7f59-4b63-bec8-db83b1a3dd36\") " pod="calico-system/calico-typha-84c64d8bb4-rjqrp" May 15 00:32:04.793142 kubelet[2463]: I0515 00:32:04.792893 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qx5q\" (UniqueName: \"kubernetes.io/projected/b7a7570f-7f59-4b63-bec8-db83b1a3dd36-kube-api-access-5qx5q\") pod \"calico-typha-84c64d8bb4-rjqrp\" (UID: \"b7a7570f-7f59-4b63-bec8-db83b1a3dd36\") " pod="calico-system/calico-typha-84c64d8bb4-rjqrp" May 15 00:32:04.795530 systemd[1]: Created slice kubepods-besteffort-podc6d16571_3e71_416e_a420_80298545aa0c.slice - libcontainer container kubepods-besteffort-podc6d16571_3e71_416e_a420_80298545aa0c.slice. May 15 00:32:04.894070 kubelet[2463]: I0515 00:32:04.894029 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rsnv\" (UniqueName: \"kubernetes.io/projected/c6d16571-3e71-416e-a420-80298545aa0c-kube-api-access-2rsnv\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894212 kubelet[2463]: I0515 00:32:04.894088 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6d16571-3e71-416e-a420-80298545aa0c-tigera-ca-bundle\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894212 kubelet[2463]: I0515 00:32:04.894106 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c6d16571-3e71-416e-a420-80298545aa0c-node-certs\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894212 kubelet[2463]: I0515 00:32:04.894123 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-var-lib-calico\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894212 kubelet[2463]: I0515 00:32:04.894140 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-cni-bin-dir\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894212 kubelet[2463]: I0515 00:32:04.894158 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-var-run-calico\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894363 kubelet[2463]: I0515 00:32:04.894175 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-cni-log-dir\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894363 kubelet[2463]: I0515 00:32:04.894191 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-flexvol-driver-host\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894363 kubelet[2463]: I0515 00:32:04.894218 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-lib-modules\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894363 kubelet[2463]: I0515 00:32:04.894271 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-xtables-lock\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894363 kubelet[2463]: I0515 00:32:04.894292 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-policysync\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.894470 kubelet[2463]: I0515 00:32:04.894309 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c6d16571-3e71-416e-a420-80298545aa0c-cni-net-dir\") pod \"calico-node-2gpz6\" (UID: \"c6d16571-3e71-416e-a420-80298545aa0c\") " pod="calico-system/calico-node-2gpz6" May 15 00:32:04.907649 kubelet[2463]: E0515 00:32:04.907423 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:04.995982 kubelet[2463]: I0515 00:32:04.995937 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21b8146b-053c-41d9-a1d1-bb9a962f2acc-kubelet-dir\") pod \"csi-node-driver-gzwnl\" (UID: \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\") " pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:04.995982 kubelet[2463]: I0515 00:32:04.995991 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/21b8146b-053c-41d9-a1d1-bb9a962f2acc-registration-dir\") pod \"csi-node-driver-gzwnl\" (UID: \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\") " pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:04.996136 kubelet[2463]: I0515 00:32:04.996047 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21b8146b-053c-41d9-a1d1-bb9a962f2acc-socket-dir\") pod \"csi-node-driver-gzwnl\" (UID: \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\") " pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:04.996136 kubelet[2463]: I0515 00:32:04.996097 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21b8146b-053c-41d9-a1d1-bb9a962f2acc-varrun\") pod \"csi-node-driver-gzwnl\" (UID: \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\") " pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:04.996136 kubelet[2463]: I0515 00:32:04.996114 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsl8f\" (UniqueName: \"kubernetes.io/projected/21b8146b-053c-41d9-a1d1-bb9a962f2acc-kube-api-access-zsl8f\") pod \"csi-node-driver-gzwnl\" (UID: \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\") " pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:04.997059 kubelet[2463]: E0515 00:32:04.997035 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.997059 kubelet[2463]: W0515 00:32:04.997057 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.997144 kubelet[2463]: E0515 00:32:04.997123 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.997398 kubelet[2463]: E0515 00:32:04.997379 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.997398 kubelet[2463]: W0515 00:32:04.997398 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.997478 kubelet[2463]: E0515 00:32:04.997418 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.997900 kubelet[2463]: E0515 00:32:04.997863 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.997900 kubelet[2463]: W0515 00:32:04.997883 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.997900 kubelet[2463]: E0515 00:32:04.997900 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:04.998162 kubelet[2463]: E0515 00:32:04.998148 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.998162 kubelet[2463]: W0515 00:32:04.998160 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.998252 kubelet[2463]: E0515 00:32:04.998221 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.998889 kubelet[2463]: E0515 00:32:04.998854 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.998939 kubelet[2463]: W0515 00:32:04.998888 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.998970 kubelet[2463]: E0515 00:32:04.998955 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.999166 kubelet[2463]: E0515 00:32:04.999142 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.999166 kubelet[2463]: W0515 00:32:04.999157 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.999254 kubelet[2463]: E0515 00:32:04.999224 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.999450 kubelet[2463]: E0515 00:32:04.999409 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.999450 kubelet[2463]: W0515 00:32:04.999436 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.999547 kubelet[2463]: E0515 00:32:04.999523 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:04.999625 kubelet[2463]: E0515 00:32:04.999610 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.999625 kubelet[2463]: W0515 00:32:04.999622 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.999757 kubelet[2463]: E0515 00:32:04.999656 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:04.999814 kubelet[2463]: E0515 00:32:04.999800 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:04.999845 kubelet[2463]: W0515 00:32:04.999825 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:04.999845 kubelet[2463]: E0515 00:32:04.999839 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.000046 kubelet[2463]: E0515 00:32:05.000033 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.000086 kubelet[2463]: W0515 00:32:05.000044 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.000115 kubelet[2463]: E0515 00:32:05.000084 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.000347 kubelet[2463]: E0515 00:32:05.000329 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.000347 kubelet[2463]: W0515 00:32:05.000344 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.000412 kubelet[2463]: E0515 00:32:05.000358 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.000614 kubelet[2463]: E0515 00:32:05.000589 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.000614 kubelet[2463]: W0515 00:32:05.000606 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.000682 kubelet[2463]: E0515 00:32:05.000640 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.001064 kubelet[2463]: E0515 00:32:05.000930 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.001064 kubelet[2463]: W0515 00:32:05.000946 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.001064 kubelet[2463]: E0515 00:32:05.000959 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.001210 kubelet[2463]: E0515 00:32:05.001198 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.001275 kubelet[2463]: W0515 00:32:05.001264 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.001343 kubelet[2463]: E0515 00:32:05.001330 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.006662 kubelet[2463]: E0515 00:32:05.006568 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.006662 kubelet[2463]: W0515 00:32:05.006588 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.006662 kubelet[2463]: E0515 00:32:05.006601 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.009383 kubelet[2463]: E0515 00:32:05.009306 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.009383 kubelet[2463]: W0515 00:32:05.009326 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.009383 kubelet[2463]: E0515 00:32:05.009341 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.073850 kubelet[2463]: E0515 00:32:05.073725 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:05.078656 containerd[1443]: time="2025-05-15T00:32:05.078615209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c64d8bb4-rjqrp,Uid:b7a7570f-7f59-4b63-bec8-db83b1a3dd36,Namespace:calico-system,Attempt:0,}" May 15 00:32:05.097664 kubelet[2463]: E0515 00:32:05.097499 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.097664 kubelet[2463]: W0515 00:32:05.097520 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.097664 kubelet[2463]: E0515 00:32:05.097540 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.098291 kubelet[2463]: E0515 00:32:05.097768 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.098291 kubelet[2463]: W0515 00:32:05.097777 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.098291 kubelet[2463]: E0515 00:32:05.097787 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.098561 kubelet[2463]: E0515 00:32:05.098541 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.098610 kubelet[2463]: W0515 00:32:05.098557 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.098610 kubelet[2463]: E0515 00:32:05.098579 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.099279 kubelet[2463]: E0515 00:32:05.099226 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.099279 kubelet[2463]: W0515 00:32:05.099253 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.099279 kubelet[2463]: E0515 00:32:05.099272 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.099539 kubelet[2463]: E0515 00:32:05.099499 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:05.100510 containerd[1443]: time="2025-05-15T00:32:05.100014381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gpz6,Uid:c6d16571-3e71-416e-a420-80298545aa0c,Namespace:calico-system,Attempt:0,}" May 15 00:32:05.100714 kubelet[2463]: E0515 00:32:05.100667 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.100714 kubelet[2463]: W0515 00:32:05.100685 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.100909 kubelet[2463]: E0515 00:32:05.100759 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.101488 kubelet[2463]: E0515 00:32:05.101464 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.102003 kubelet[2463]: W0515 00:32:05.101881 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.102003 kubelet[2463]: E0515 00:32:05.101958 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.102189 kubelet[2463]: E0515 00:32:05.102159 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.102189 kubelet[2463]: W0515 00:32:05.102186 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.102282 kubelet[2463]: E0515 00:32:05.102233 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.102797 kubelet[2463]: E0515 00:32:05.102575 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.102797 kubelet[2463]: W0515 00:32:05.102590 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.102797 kubelet[2463]: E0515 00:32:05.102650 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.102797 kubelet[2463]: E0515 00:32:05.102777 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.102797 kubelet[2463]: W0515 00:32:05.102786 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.103360 kubelet[2463]: E0515 00:32:05.102834 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.103392 kubelet[2463]: E0515 00:32:05.103380 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.103392 kubelet[2463]: W0515 00:32:05.103389 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.103437 kubelet[2463]: E0515 00:32:05.103422 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.103605 kubelet[2463]: E0515 00:32:05.103574 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.103605 kubelet[2463]: W0515 00:32:05.103588 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.103684 kubelet[2463]: E0515 00:32:05.103611 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.103765 kubelet[2463]: E0515 00:32:05.103749 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.103765 kubelet[2463]: W0515 00:32:05.103760 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.103826 kubelet[2463]: E0515 00:32:05.103786 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.103937 kubelet[2463]: E0515 00:32:05.103918 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.103937 kubelet[2463]: W0515 00:32:05.103928 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.103999 kubelet[2463]: E0515 00:32:05.103948 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.104088 kubelet[2463]: E0515 00:32:05.104071 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.104088 kubelet[2463]: W0515 00:32:05.104080 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.104151 kubelet[2463]: E0515 00:32:05.104110 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.104417 kubelet[2463]: E0515 00:32:05.104257 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.104417 kubelet[2463]: W0515 00:32:05.104267 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.104417 kubelet[2463]: E0515 00:32:05.104280 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.104503 kubelet[2463]: E0515 00:32:05.104441 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.104503 kubelet[2463]: W0515 00:32:05.104449 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.104503 kubelet[2463]: E0515 00:32:05.104462 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.104941 kubelet[2463]: E0515 00:32:05.104684 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.104941 kubelet[2463]: W0515 00:32:05.104694 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.104941 kubelet[2463]: E0515 00:32:05.104705 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.104941 kubelet[2463]: E0515 00:32:05.104913 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.104941 kubelet[2463]: W0515 00:32:05.104922 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.104941 kubelet[2463]: E0515 00:32:05.104936 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.105165 kubelet[2463]: E0515 00:32:05.105149 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.105165 kubelet[2463]: W0515 00:32:05.105163 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.105222 kubelet[2463]: E0515 00:32:05.105212 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.105594 kubelet[2463]: E0515 00:32:05.105349 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.105594 kubelet[2463]: W0515 00:32:05.105359 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.105594 kubelet[2463]: E0515 00:32:05.105398 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.105594 kubelet[2463]: E0515 00:32:05.105496 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.105594 kubelet[2463]: W0515 00:32:05.105503 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.105594 kubelet[2463]: E0515 00:32:05.105533 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.105818 kubelet[2463]: E0515 00:32:05.105644 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.105818 kubelet[2463]: W0515 00:32:05.105653 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.105818 kubelet[2463]: E0515 00:32:05.105666 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.105897 kubelet[2463]: E0515 00:32:05.105851 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.105897 kubelet[2463]: W0515 00:32:05.105860 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.105897 kubelet[2463]: E0515 00:32:05.105869 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.106288 kubelet[2463]: E0515 00:32:05.106270 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.106288 kubelet[2463]: W0515 00:32:05.106284 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.106428 kubelet[2463]: E0515 00:32:05.106295 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.106559 kubelet[2463]: E0515 00:32:05.106527 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.106559 kubelet[2463]: W0515 00:32:05.106539 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.106559 kubelet[2463]: E0515 00:32:05.106548 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:05.119064 containerd[1443]: time="2025-05-15T00:32:05.118924837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:05.119064 containerd[1443]: time="2025-05-15T00:32:05.118984876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:05.119064 containerd[1443]: time="2025-05-15T00:32:05.119000356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:05.119202 containerd[1443]: time="2025-05-15T00:32:05.119083515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:05.132074 kubelet[2463]: E0515 00:32:05.132003 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:05.132074 kubelet[2463]: W0515 00:32:05.132028 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:05.132395 kubelet[2463]: E0515 00:32:05.132328 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:05.142432 systemd[1]: Started cri-containerd-50f54bc731e727c2860480a9056b5bd8da6d7d113801b1c781b84f89218b9d0f.scope - libcontainer container 50f54bc731e727c2860480a9056b5bd8da6d7d113801b1c781b84f89218b9d0f. May 15 00:32:05.189223 containerd[1443]: time="2025-05-15T00:32:05.189003285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:05.189223 containerd[1443]: time="2025-05-15T00:32:05.189055084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:05.189223 containerd[1443]: time="2025-05-15T00:32:05.189065844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:05.189223 containerd[1443]: time="2025-05-15T00:32:05.189131603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:05.203343 containerd[1443]: time="2025-05-15T00:32:05.203300306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84c64d8bb4-rjqrp,Uid:b7a7570f-7f59-4b63-bec8-db83b1a3dd36,Namespace:calico-system,Attempt:0,} returns sandbox id \"50f54bc731e727c2860480a9056b5bd8da6d7d113801b1c781b84f89218b9d0f\"" May 15 00:32:05.210793 kubelet[2463]: E0515 00:32:05.210748 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:05.215345 containerd[1443]: time="2025-05-15T00:32:05.215312648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:32:05.219439 systemd[1]: Started cri-containerd-0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c.scope - libcontainer container 0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c. 
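The driver-call failures that dominate this stretch all trace to one missing file: kubelet's FlexVolume prober keeps invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is not present yet (the calico-node pod's flexvol-driver-host host-path mount, registered above, is what normally installs it), and the empty stdout then fails JSON decoding. A FlexVolume driver is just an executable that answers init with a JSON status on stdout; a minimal hedged sketch of such a driver, using the conventional FlexVolume response shape rather than Calico's actual uds implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus follows the conventional FlexVolume response shape:
    // a status string plus optional capability flags, emitted as JSON
    // on stdout. (Assumed shape; not Calico's actual uds driver.)
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
        out, _ := json.Marshal(s)
        fmt.Println(string(out))
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Answering init with a Success status is what would stop
            // the "unexpected end of JSON input" probe failures above.
            reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            return
        }
        // Calls this sketch does not implement are declared unsupported.
        reply(driverStatus{Status: "Not supported"})
    }

Once calico-node starts and drops the real uds binary into that directory, the probe loop should quiet down on its own.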
May 15 00:32:05.242939 containerd[1443]: time="2025-05-15T00:32:05.242884707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gpz6,Uid:c6d16571-3e71-416e-a420-80298545aa0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\"" May 15 00:32:05.243558 kubelet[2463]: E0515 00:32:05.243537 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:06.434571 kubelet[2463]: E0515 00:32:06.434519 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:07.620186 update_engine[1425]: I20250515 00:32:07.620087 1425 update_attempter.cc:509] Updating boot flags... May 15 00:32:07.654819 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2993) May 15 00:32:07.692346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2992) May 15 00:32:07.780210 containerd[1443]: time="2025-05-15T00:32:07.779769137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:07.780810 containerd[1443]: time="2025-05-15T00:32:07.780386767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 00:32:07.781490 containerd[1443]: time="2025-05-15T00:32:07.781436990Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:07.784096 containerd[1443]: time="2025-05-15T00:32:07.784061748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:07.785745 containerd[1443]: time="2025-05-15T00:32:07.785160451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.569712325s" May 15 00:32:07.785745 containerd[1443]: time="2025-05-15T00:32:07.785698082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 00:32:07.787454 containerd[1443]: time="2025-05-15T00:32:07.787419335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:32:07.796747 containerd[1443]: time="2025-05-15T00:32:07.796436911Z" level=info msg="CreateContainer within sandbox \"50f54bc731e727c2860480a9056b5bd8da6d7d113801b1c781b84f89218b9d0f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:32:07.811454 containerd[1443]: time="2025-05-15T00:32:07.811394872Z" level=info msg="CreateContainer within sandbox 
\"50f54bc731e727c2860480a9056b5bd8da6d7d113801b1c781b84f89218b9d0f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13d908f742a7e1fd589e1955d2d306ce3c76d02e82e79f624dba7589c4781504\"" May 15 00:32:07.812437 containerd[1443]: time="2025-05-15T00:32:07.812407216Z" level=info msg="StartContainer for \"13d908f742a7e1fd589e1955d2d306ce3c76d02e82e79f624dba7589c4781504\"" May 15 00:32:07.842486 systemd[1]: Started cri-containerd-13d908f742a7e1fd589e1955d2d306ce3c76d02e82e79f624dba7589c4781504.scope - libcontainer container 13d908f742a7e1fd589e1955d2d306ce3c76d02e82e79f624dba7589c4781504. May 15 00:32:07.921454 containerd[1443]: time="2025-05-15T00:32:07.921342998Z" level=info msg="StartContainer for \"13d908f742a7e1fd589e1955d2d306ce3c76d02e82e79f624dba7589c4781504\" returns successfully" May 15 00:32:08.427539 kubelet[2463]: E0515 00:32:08.427446 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:08.477255 kubelet[2463]: E0515 00:32:08.477215 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:08.503486 kubelet[2463]: E0515 00:32:08.503391 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.504354 kubelet[2463]: W0515 00:32:08.504050 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.504354 kubelet[2463]: E0515 00:32:08.504080 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:08.504354 kubelet[2463]: I0515 00:32:08.504073 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84c64d8bb4-rjqrp" podStartSLOduration=1.931855556 podStartE2EDuration="4.503714886s" podCreationTimestamp="2025-05-15 00:32:04 +0000 UTC" firstStartedPulling="2025-05-15 00:32:05.215033293 +0000 UTC m=+12.872715968" lastFinishedPulling="2025-05-15 00:32:07.786892623 +0000 UTC m=+15.444575298" observedRunningTime="2025-05-15 00:32:08.503221694 +0000 UTC m=+16.160904369" watchObservedRunningTime="2025-05-15 00:32:08.503714886 +0000 UTC m=+16.161397561" May 15 00:32:08.504760 kubelet[2463]: E0515 00:32:08.504640 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.504760 kubelet[2463]: W0515 00:32:08.504668 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.504760 kubelet[2463]: E0515 00:32:08.504713 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:08.505728 kubelet[2463]: E0515 00:32:08.505636 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.505728 kubelet[2463]: W0515 00:32:08.505663 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.505728 kubelet[2463]: E0515 00:32:08.505676 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:08.506128 kubelet[2463]: E0515 00:32:08.506052 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.506128 kubelet[2463]: W0515 00:32:08.506065 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.506128 kubelet[2463]: E0515 00:32:08.506076 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:08.506569 kubelet[2463]: E0515 00:32:08.506553 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.507045 kubelet[2463]: W0515 00:32:08.506924 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.507045 kubelet[2463]: E0515 00:32:08.506949 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:08.507330 kubelet[2463]: E0515 00:32:08.507204 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.507330 kubelet[2463]: W0515 00:32:08.507218 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.507330 kubelet[2463]: E0515 00:32:08.507228 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:32:08.507529 kubelet[2463]: E0515 00:32:08.507515 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.507581 kubelet[2463]: W0515 00:32:08.507570 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.507661 kubelet[2463]: E0515 00:32:08.507636 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:32:08.508024 kubelet[2463]: E0515 00:32:08.508008 2463 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:32:08.508343 kubelet[2463]: W0515 00:32:08.508104 2463 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:32:08.508343 kubelet[2463]: E0515 00:32:08.508123 2463 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [… the same driver-call.go/driver-call.go/plugins.go triplet for nodeagent~uds repeats with fresh timestamps through May 15 00:32:08.527351; identical entries omitted …] May 15 00:32:08.859158 containerd[1443]: time="2025-05-15T00:32:08.858752375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:08.860058 containerd[1443]: time="2025-05-15T00:32:08.859820919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 00:32:08.862467 containerd[1443]: time="2025-05-15T00:32:08.862415241Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:08.864760 containerd[1443]: time="2025-05-15T00:32:08.864712326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:08.865350 containerd[1443]: time="2025-05-15T00:32:08.865314357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.077857143s" May 15 00:32:08.865401 containerd[1443]: time="2025-05-15T00:32:08.865358557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 00:32:08.867171 containerd[1443]: time="2025-05-15T00:32:08.867145050Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:32:08.878204 containerd[1443]: time="2025-05-15T00:32:08.878170605Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148\"" May 15 00:32:08.879593 containerd[1443]: time="2025-05-15T00:32:08.879557984Z" level=info msg="StartContainer for \"c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148\"" May 15 00:32:08.913468 systemd[1]: Started cri-containerd-c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148.scope - libcontainer container c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148.
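The FlexVolume spam above and the flexvol-driver container started here are two halves of one bootstrap step: kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by invoking the driver binary with "init", and because Calico's nodeagent~uds/uds binary has not been installed yet (installing it is exactly what the pod2daemon-flexvol container just pulled is for), the call produces no stdout and kubelet's JSON decode of the empty string fails. A minimal Go sketch of that contract; the driverStatus type follows the published FlexVolume response format but is illustrative, not kubelet's own struct:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // driverStatus is the rough shape of the JSON a FlexVolume driver must
    // print in response to "init". Field names follow the public FlexVolume
    // contract; this is a sketch, not kubelet's actual type.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	// With the uds binary missing, the driver call yields empty stdout,
    	// and decoding "" reproduces the logged error exactly.
    	var st driverStatus
    	err := json.Unmarshal([]byte(""), &st)
    	fmt.Println(err) // unexpected end of JSON input

    	// What a healthy init response looks like once the driver exists:
    	ok, _ := json.Marshal(driverStatus{
    		Status:       "Success",
    		Capabilities: map[string]bool{"attach": false},
    	})
    	fmt.Println(string(ok)) // {"status":"Success","capabilities":{"attach":false}}
    }

Once flexvol-driver finishes copying the binary into place, the probe succeeds and the triplet stops recurring.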
May 15 00:32:08.946040 containerd[1443]: time="2025-05-15T00:32:08.944402854Z" level=info msg="StartContainer for \"c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148\" returns successfully" May 15 00:32:08.973496 systemd[1]: cri-containerd-c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148.scope: Deactivated successfully. May 15 00:32:09.011935 containerd[1443]: time="2025-05-15T00:32:09.000762291Z" level=info msg="shim disconnected" id=c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148 namespace=k8s.io May 15 00:32:09.011935 containerd[1443]: time="2025-05-15T00:32:09.011933894Z" level=warning msg="cleaning up after shim disconnected" id=c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148 namespace=k8s.io May 15 00:32:09.012150 containerd[1443]: time="2025-05-15T00:32:09.011950934Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:32:09.479659 kubelet[2463]: I0515 00:32:09.479610 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:32:09.480123 kubelet[2463]: E0515 00:32:09.479889 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:09.480123 kubelet[2463]: E0515 00:32:09.479901 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:09.480665 containerd[1443]: time="2025-05-15T00:32:09.480624761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:32:09.791364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5653821de3e88310afcecea8d63e1a7ac93300850d6585c70a5184db5b39148-rootfs.mount: Deactivated successfully. 
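The two "Nameserver limits exceeded" entries are kubelet trimming the host's resolv.conf: it propagates at most three nameservers into a pod, so with more than three configured only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied. A sketch of that truncation, assuming a hypothetical fourth server as the dropped entry, since the log does not show which one was cut:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Kubelet keeps at most three nameservers per pod, mirroring the
    // classic glibc resolver limit. Sketch, not kubelet's code.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) []string {
    	if len(servers) > maxNameservers {
    		return servers[:maxNameservers]
    	}
    	return servers
    }

    func main() {
    	// 9.9.9.9 is invented here for illustration.
    	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
    	applied := applyNameserverLimit(host)
    	fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
    }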
May 15 00:32:10.429398 kubelet[2463]: E0515 00:32:10.429360 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:12.429489 kubelet[2463]: E0515 00:32:12.429452 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:13.898155 containerd[1443]: time="2025-05-15T00:32:13.898106876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:13.899074 containerd[1443]: time="2025-05-15T00:32:13.898605470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 00:32:13.899841 containerd[1443]: time="2025-05-15T00:32:13.899546460Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:13.902358 containerd[1443]: time="2025-05-15T00:32:13.902314110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:13.903843 containerd[1443]: time="2025-05-15T00:32:13.903808934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.423137813s" May 15 00:32:13.903900 containerd[1443]: time="2025-05-15T00:32:13.903852933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 00:32:13.906845 containerd[1443]: time="2025-05-15T00:32:13.906816621Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:32:13.930412 containerd[1443]: time="2025-05-15T00:32:13.930368046Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f\"" May 15 00:32:13.930882 containerd[1443]: time="2025-05-15T00:32:13.930716562Z" level=info msg="StartContainer for \"5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f\"" May 15 00:32:13.961430 systemd[1]: Started cri-containerd-5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f.scope - libcontainer container 5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f. 
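For scale, the calico/cni pull above moved roughly 91 MB of layers in about 4.4 s, on the order of 20 MiB/s. A quick check of that arithmetic, using the figures straight from the Pulled image entry:

    package main

    import "fmt"

    func main() {
    	// Repo-digest size 92625452 bytes fetched in 4.423137813s; the
    	// "bytes read=91256270" counter gives nearly the same figure.
    	const size = 92625452.0
    	const secs = 4.423137813
    	fmt.Printf("%.1f MiB/s\n", size/secs/(1024*1024)) // ~20.0 MiB/s
    }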
May 15 00:32:13.991825 containerd[1443]: time="2025-05-15T00:32:13.991692262Z" level=info msg="StartContainer for \"5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f\" returns successfully" May 15 00:32:14.427199 kubelet[2463]: E0515 00:32:14.427089 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:14.481319 systemd[1]: cri-containerd-5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f.scope: Deactivated successfully. May 15 00:32:14.491194 kubelet[2463]: E0515 00:32:14.491159 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:14.507922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f-rootfs.mount: Deactivated successfully. May 15 00:32:14.543682 kubelet[2463]: I0515 00:32:14.543398 2463 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 00:32:14.638908 systemd[1]: Created slice kubepods-besteffort-podf13eb429_1c6c_4728_8d7b_b418b49a379b.slice - libcontainer container kubepods-besteffort-podf13eb429_1c6c_4728_8d7b_b418b49a379b.slice. May 15 00:32:14.643845 systemd[1]: Created slice kubepods-besteffort-poda015a20f_6c2f_4521_a3f8_af0d5074817c.slice - libcontainer container kubepods-besteffort-poda015a20f_6c2f_4521_a3f8_af0d5074817c.slice. May 15 00:32:14.663175 containerd[1443]: time="2025-05-15T00:32:14.662771961Z" level=info msg="shim disconnected" id=5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f namespace=k8s.io May 15 00:32:14.663811 containerd[1443]: time="2025-05-15T00:32:14.663189236Z" level=warning msg="cleaning up after shim disconnected" id=5dd925cb57d650aef14dd4de3b96bf002d4d0d9e075edd99c2ac7c75fe7a4e4f namespace=k8s.io May 15 00:32:14.663811 containerd[1443]: time="2025-05-15T00:32:14.663647392Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:32:14.667449 kubelet[2463]: I0515 00:32:14.667387 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbbf\" (UniqueName: \"kubernetes.io/projected/133536c7-a00d-44e3-b80c-3429e5cc650f-kube-api-access-hvbbf\") pod \"calico-apiserver-7848fb646-fthrc\" (UID: \"133536c7-a00d-44e3-b80c-3429e5cc650f\") " pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" May 15 00:32:14.667581 kubelet[2463]: I0515 00:32:14.667452 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqgm5\" (UniqueName: \"kubernetes.io/projected/1f81d208-cbad-47fe-a5a2-d5e9a5c74af4-kube-api-access-tqgm5\") pod \"coredns-668d6bf9bc-kmp6l\" (UID: \"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4\") " pod="kube-system/coredns-668d6bf9bc-kmp6l" May 15 00:32:14.667581 kubelet[2463]: I0515 00:32:14.667512 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmn9\" (UniqueName: \"kubernetes.io/projected/d06094c1-9261-4b11-9614-1030ad3afe7f-kube-api-access-7bmn9\") pod \"coredns-668d6bf9bc-j968j\" (UID: \"d06094c1-9261-4b11-9614-1030ad3afe7f\") " pod="kube-system/coredns-668d6bf9bc-j968j" May 15 
00:32:14.667581 kubelet[2463]: I0515 00:32:14.667537 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f81d208-cbad-47fe-a5a2-d5e9a5c74af4-config-volume\") pod \"coredns-668d6bf9bc-kmp6l\" (UID: \"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4\") " pod="kube-system/coredns-668d6bf9bc-kmp6l" May 15 00:32:14.667581 kubelet[2463]: I0515 00:32:14.667569 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx826\" (UniqueName: \"kubernetes.io/projected/f13eb429-1c6c-4728-8d7b-b418b49a379b-kube-api-access-zx826\") pod \"calico-apiserver-7848fb646-krrw9\" (UID: \"f13eb429-1c6c-4728-8d7b-b418b49a379b\") " pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" May 15 00:32:14.667718 kubelet[2463]: I0515 00:32:14.667592 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d06094c1-9261-4b11-9614-1030ad3afe7f-config-volume\") pod \"coredns-668d6bf9bc-j968j\" (UID: \"d06094c1-9261-4b11-9614-1030ad3afe7f\") " pod="kube-system/coredns-668d6bf9bc-j968j" May 15 00:32:14.667718 kubelet[2463]: I0515 00:32:14.667615 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/133536c7-a00d-44e3-b80c-3429e5cc650f-calico-apiserver-certs\") pod \"calico-apiserver-7848fb646-fthrc\" (UID: \"133536c7-a00d-44e3-b80c-3429e5cc650f\") " pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" May 15 00:32:14.667718 kubelet[2463]: I0515 00:32:14.667630 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f13eb429-1c6c-4728-8d7b-b418b49a379b-calico-apiserver-certs\") pod \"calico-apiserver-7848fb646-krrw9\" (UID: \"f13eb429-1c6c-4728-8d7b-b418b49a379b\") " pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" May 15 00:32:14.667718 kubelet[2463]: I0515 00:32:14.667652 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a015a20f-6c2f-4521-a3f8-af0d5074817c-tigera-ca-bundle\") pod \"calico-kube-controllers-6bd5fd459d-c9qd4\" (UID: \"a015a20f-6c2f-4521-a3f8-af0d5074817c\") " pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" May 15 00:32:14.667718 kubelet[2463]: I0515 00:32:14.667678 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzdf\" (UniqueName: \"kubernetes.io/projected/a015a20f-6c2f-4521-a3f8-af0d5074817c-kube-api-access-gkzdf\") pod \"calico-kube-controllers-6bd5fd459d-c9qd4\" (UID: \"a015a20f-6c2f-4521-a3f8-af0d5074817c\") " pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" May 15 00:32:14.680492 systemd[1]: Created slice kubepods-besteffort-pod133536c7_a00d_44e3_b80c_3429e5cc650f.slice - libcontainer container kubepods-besteffort-pod133536c7_a00d_44e3_b80c_3429e5cc650f.slice. May 15 00:32:14.689308 systemd[1]: Created slice kubepods-burstable-podd06094c1_9261_4b11_9614_1030ad3afe7f.slice - libcontainer container kubepods-burstable-podd06094c1_9261_4b11_9614_1030ad3afe7f.slice. 
May 15 00:32:14.696825 systemd[1]: Created slice kubepods-burstable-pod1f81d208_cbad_47fe_a5a2_d5e9a5c74af4.slice - libcontainer container kubepods-burstable-pod1f81d208_cbad_47fe_a5a2_d5e9a5c74af4.slice. May 15 00:32:14.944127 containerd[1443]: time="2025-05-15T00:32:14.944006424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-krrw9,Uid:f13eb429-1c6c-4728-8d7b-b418b49a379b,Namespace:calico-apiserver,Attempt:0,}" May 15 00:32:14.949089 containerd[1443]: time="2025-05-15T00:32:14.949041853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd5fd459d-c9qd4,Uid:a015a20f-6c2f-4521-a3f8-af0d5074817c,Namespace:calico-system,Attempt:0,}" May 15 00:32:14.987526 containerd[1443]: time="2025-05-15T00:32:14.987480783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-fthrc,Uid:133536c7-a00d-44e3-b80c-3429e5cc650f,Namespace:calico-apiserver,Attempt:0,}" May 15 00:32:14.995962 kubelet[2463]: E0515 00:32:14.995556 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:15.000218 containerd[1443]: time="2025-05-15T00:32:15.000183214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j968j,Uid:d06094c1-9261-4b11-9614-1030ad3afe7f,Namespace:kube-system,Attempt:0,}" May 15 00:32:15.008987 kubelet[2463]: E0515 00:32:15.002084 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:15.009511 containerd[1443]: time="2025-05-15T00:32:15.009350565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmp6l,Uid:1f81d208-cbad-47fe-a5a2-d5e9a5c74af4,Namespace:kube-system,Attempt:0,}" May 15 00:32:15.381547 containerd[1443]: time="2025-05-15T00:32:15.381391663Z" level=error msg="Failed to destroy network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.382343 containerd[1443]: time="2025-05-15T00:32:15.382296054Z" level=error msg="Failed to destroy network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.382973 containerd[1443]: time="2025-05-15T00:32:15.382939328Z" level=error msg="encountered an error cleaning up failed sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.383796 containerd[1443]: time="2025-05-15T00:32:15.383742641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-fthrc,Uid:133536c7-a00d-44e3-b80c-3429e5cc650f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.383911 containerd[1443]: time="2025-05-15T00:32:15.382967928Z" level=error msg="encountered an error cleaning up failed sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.383911 containerd[1443]: time="2025-05-15T00:32:15.383844360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd5fd459d-c9qd4,Uid:a015a20f-6c2f-4521-a3f8-af0d5074817c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.385888 kubelet[2463]: E0515 00:32:15.385830 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.385994 kubelet[2463]: E0515 00:32:15.385919 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" May 15 00:32:15.385994 kubelet[2463]: E0515 00:32:15.385940 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" May 15 00:32:15.386056 kubelet[2463]: E0515 00:32:15.385991 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7848fb646-fthrc_calico-apiserver(133536c7-a00d-44e3-b80c-3429e5cc650f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7848fb646-fthrc_calico-apiserver(133536c7-a00d-44e3-b80c-3429e5cc650f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" podUID="133536c7-a00d-44e3-b80c-3429e5cc650f" May 15 00:32:15.386234 kubelet[2463]: E0515 00:32:15.386201 2463 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.386406 kubelet[2463]: E0515 00:32:15.386350 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" May 15 00:32:15.386516 kubelet[2463]: E0515 00:32:15.386499 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" May 15 00:32:15.386639 kubelet[2463]: E0515 00:32:15.386615 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bd5fd459d-c9qd4_calico-system(a015a20f-6c2f-4521-a3f8-af0d5074817c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bd5fd459d-c9qd4_calico-system(a015a20f-6c2f-4521-a3f8-af0d5074817c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" podUID="a015a20f-6c2f-4521-a3f8-af0d5074817c" May 15 00:32:15.387132 containerd[1443]: time="2025-05-15T00:32:15.387091769Z" level=error msg="Failed to destroy network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.387583 containerd[1443]: time="2025-05-15T00:32:15.387489565Z" level=error msg="encountered an error cleaning up failed sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.387583 containerd[1443]: time="2025-05-15T00:32:15.387564684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j968j,Uid:d06094c1-9261-4b11-9614-1030ad3afe7f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 15 00:32:15.387810 kubelet[2463]: E0515 00:32:15.387769 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.387862 kubelet[2463]: E0515 00:32:15.387817 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j968j" May 15 00:32:15.387862 kubelet[2463]: E0515 00:32:15.387837 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j968j" May 15 00:32:15.388071 kubelet[2463]: E0515 00:32:15.387868 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j968j_kube-system(d06094c1-9261-4b11-9614-1030ad3afe7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j968j_kube-system(d06094c1-9261-4b11-9614-1030ad3afe7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j968j" podUID="d06094c1-9261-4b11-9614-1030ad3afe7f" May 15 00:32:15.395119 containerd[1443]: time="2025-05-15T00:32:15.394944334Z" level=error msg="Failed to destroy network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.395926 containerd[1443]: time="2025-05-15T00:32:15.395733206Z" level=error msg="encountered an error cleaning up failed sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.395926 containerd[1443]: time="2025-05-15T00:32:15.395798206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-krrw9,Uid:f13eb429-1c6c-4728-8d7b-b418b49a379b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.396219 kubelet[2463]: E0515 00:32:15.396185 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.397510 kubelet[2463]: E0515 00:32:15.397379 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" May 15 00:32:15.397510 kubelet[2463]: E0515 00:32:15.397417 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" May 15 00:32:15.398169 kubelet[2463]: E0515 00:32:15.397515 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7848fb646-krrw9_calico-apiserver(f13eb429-1c6c-4728-8d7b-b418b49a379b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7848fb646-krrw9_calico-apiserver(f13eb429-1c6c-4728-8d7b-b418b49a379b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" podUID="f13eb429-1c6c-4728-8d7b-b418b49a379b" May 15 00:32:15.403455 containerd[1443]: time="2025-05-15T00:32:15.403415773Z" level=error msg="Failed to destroy network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.403767 containerd[1443]: time="2025-05-15T00:32:15.403728330Z" level=error msg="encountered an error cleaning up failed sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.403818 containerd[1443]: time="2025-05-15T00:32:15.403779050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmp6l,Uid:1f81d208-cbad-47fe-a5a2-d5e9a5c74af4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.404029 kubelet[2463]: E0515 00:32:15.403963 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.404068 kubelet[2463]: E0515 00:32:15.404040 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kmp6l" May 15 00:32:15.404068 kubelet[2463]: E0515 00:32:15.404058 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kmp6l" May 15 00:32:15.404123 kubelet[2463]: E0515 00:32:15.404098 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kmp6l_kube-system(1f81d208-cbad-47fe-a5a2-d5e9a5c74af4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kmp6l_kube-system(1f81d208-cbad-47fe-a5a2-d5e9a5c74af4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kmp6l" podUID="1f81d208-cbad-47fe-a5a2-d5e9a5c74af4" May 15 00:32:15.493954 kubelet[2463]: I0515 00:32:15.493913 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:15.494794 containerd[1443]: time="2025-05-15T00:32:15.494750184Z" level=info msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" May 15 00:32:15.495583 containerd[1443]: time="2025-05-15T00:32:15.494915702Z" level=info msg="Ensure that sandbox 40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093 in task-service has been cleanup successfully" May 15 00:32:15.496800 kubelet[2463]: I0515 00:32:15.496746 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:15.497824 containerd[1443]: time="2025-05-15T00:32:15.497390199Z" level=info msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" May 15 00:32:15.497824 containerd[1443]: 
time="2025-05-15T00:32:15.497607077Z" level=info msg="Ensure that sandbox 366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda in task-service has been cleanup successfully" May 15 00:32:15.500270 kubelet[2463]: I0515 00:32:15.500188 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:15.501417 containerd[1443]: time="2025-05-15T00:32:15.501382401Z" level=info msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" May 15 00:32:15.501945 containerd[1443]: time="2025-05-15T00:32:15.501547879Z" level=info msg="Ensure that sandbox f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae in task-service has been cleanup successfully" May 15 00:32:15.505200 kubelet[2463]: E0515 00:32:15.504761 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:15.507155 kubelet[2463]: I0515 00:32:15.506915 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:15.507805 containerd[1443]: time="2025-05-15T00:32:15.507696420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:32:15.508117 containerd[1443]: time="2025-05-15T00:32:15.508082857Z" level=info msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" May 15 00:32:15.508374 containerd[1443]: time="2025-05-15T00:32:15.508284335Z" level=info msg="Ensure that sandbox 4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa in task-service has been cleanup successfully" May 15 00:32:15.512720 kubelet[2463]: I0515 00:32:15.512607 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:15.513158 containerd[1443]: time="2025-05-15T00:32:15.513080809Z" level=info msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" May 15 00:32:15.513412 containerd[1443]: time="2025-05-15T00:32:15.513387846Z" level=info msg="Ensure that sandbox 320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155 in task-service has been cleanup successfully" May 15 00:32:15.545558 containerd[1443]: time="2025-05-15T00:32:15.545496461Z" level=error msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" failed" error="failed to destroy network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.546142 kubelet[2463]: E0515 00:32:15.545873 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:15.546142 kubelet[2463]: E0515 00:32:15.545941 2463 kuberuntime_manager.go:1546] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093"} May 15 00:32:15.546142 kubelet[2463]: E0515 00:32:15.546046 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a015a20f-6c2f-4521-a3f8-af0d5074817c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:15.546142 kubelet[2463]: E0515 00:32:15.546072 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a015a20f-6c2f-4521-a3f8-af0d5074817c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" podUID="a015a20f-6c2f-4521-a3f8-af0d5074817c" May 15 00:32:15.553700 containerd[1443]: time="2025-05-15T00:32:15.553506704Z" level=error msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" failed" error="failed to destroy network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.553821 kubelet[2463]: E0515 00:32:15.553789 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:15.553874 kubelet[2463]: E0515 00:32:15.553839 2463 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae"} May 15 00:32:15.553920 kubelet[2463]: E0515 00:32:15.553871 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f13eb429-1c6c-4728-8d7b-b418b49a379b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:15.553920 kubelet[2463]: E0515 00:32:15.553893 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f13eb429-1c6c-4728-8d7b-b418b49a379b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" podUID="f13eb429-1c6c-4728-8d7b-b418b49a379b" May 15 00:32:15.554479 containerd[1443]: time="2025-05-15T00:32:15.554435175Z" level=error msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" failed" error="failed to destroy network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.554648 kubelet[2463]: E0515 00:32:15.554613 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:15.554698 kubelet[2463]: E0515 00:32:15.554681 2463 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda"} May 15 00:32:15.554741 kubelet[2463]: E0515 00:32:15.554708 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"133536c7-a00d-44e3-b80c-3429e5cc650f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:15.554741 kubelet[2463]: E0515 00:32:15.554730 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"133536c7-a00d-44e3-b80c-3429e5cc650f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" podUID="133536c7-a00d-44e3-b80c-3429e5cc650f" May 15 00:32:15.561939 containerd[1443]: time="2025-05-15T00:32:15.561880825Z" level=error msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" failed" error="failed to destroy network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.562135 kubelet[2463]: E0515 00:32:15.562103 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:15.562190 kubelet[2463]: E0515 00:32:15.562145 2463 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155"} May 15 00:32:15.562190 kubelet[2463]: E0515 00:32:15.562170 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d06094c1-9261-4b11-9614-1030ad3afe7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:15.562301 kubelet[2463]: E0515 00:32:15.562189 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d06094c1-9261-4b11-9614-1030ad3afe7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j968j" podUID="d06094c1-9261-4b11-9614-1030ad3afe7f" May 15 00:32:15.564409 containerd[1443]: time="2025-05-15T00:32:15.564372801Z" level=error msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" failed" error="failed to destroy network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:15.564603 kubelet[2463]: E0515 00:32:15.564573 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:15.564668 kubelet[2463]: E0515 00:32:15.564615 2463 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa"} May 15 00:32:15.564668 kubelet[2463]: E0515 00:32:15.564641 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:15.564668 kubelet[2463]: E0515 00:32:15.564661 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kmp6l" podUID="1f81d208-cbad-47fe-a5a2-d5e9a5c74af4" May 15 00:32:15.916498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda-shm.mount: Deactivated successfully. May 15 00:32:15.916596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093-shm.mount: Deactivated successfully. May 15 00:32:15.916644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae-shm.mount: Deactivated successfully. May 15 00:32:16.433770 systemd[1]: Created slice kubepods-besteffort-pod21b8146b_053c_41d9_a1d1_bb9a962f2acc.slice - libcontainer container kubepods-besteffort-pod21b8146b_053c_41d9_a1d1_bb9a962f2acc.slice. May 15 00:32:16.435628 containerd[1443]: time="2025-05-15T00:32:16.435594845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzwnl,Uid:21b8146b-053c-41d9-a1d1-bb9a962f2acc,Namespace:calico-system,Attempt:0,}" May 15 00:32:16.485934 containerd[1443]: time="2025-05-15T00:32:16.485873676Z" level=error msg="Failed to destroy network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:16.486204 containerd[1443]: time="2025-05-15T00:32:16.486170713Z" level=error msg="encountered an error cleaning up failed sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:16.486264 containerd[1443]: time="2025-05-15T00:32:16.486226873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzwnl,Uid:21b8146b-053c-41d9-a1d1-bb9a962f2acc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:16.487067 kubelet[2463]: E0515 00:32:16.486422 2463 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:16.487067 kubelet[2463]: E0515 00:32:16.486473 2463 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:16.487067 kubelet[2463]: E0515 00:32:16.486493 2463 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gzwnl" May 15 00:32:16.487187 kubelet[2463]: E0515 00:32:16.486540 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gzwnl_calico-system(21b8146b-053c-41d9-a1d1-bb9a962f2acc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gzwnl_calico-system(21b8146b-053c-41d9-a1d1-bb9a962f2acc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:16.488484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21-shm.mount: Deactivated successfully. May 15 00:32:16.515328 kubelet[2463]: I0515 00:32:16.515291 2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:16.519292 containerd[1443]: time="2025-05-15T00:32:16.518811862Z" level=info msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" May 15 00:32:16.519292 containerd[1443]: time="2025-05-15T00:32:16.518983620Z" level=info msg="Ensure that sandbox 4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21 in task-service has been cleanup successfully" May 15 00:32:16.544568 containerd[1443]: time="2025-05-15T00:32:16.544507432Z" level=error msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" failed" error="failed to destroy network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:32:16.544919 kubelet[2463]: E0515 00:32:16.544885 2463 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:16.545079 kubelet[2463]: E0515 00:32:16.545056 2463 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21"} May 15 00:32:16.545207 kubelet[2463]: E0515 00:32:16.545176 2463 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:32:16.545483 kubelet[2463]: E0515 00:32:16.545457 2463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21b8146b-053c-41d9-a1d1-bb9a962f2acc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gzwnl" podUID="21b8146b-053c-41d9-a1d1-bb9a962f2acc" May 15 00:32:19.442826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189574852.mount: Deactivated successfully. May 15 00:32:19.755328 containerd[1443]: time="2025-05-15T00:32:19.755198438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:19.757101 containerd[1443]: time="2025-05-15T00:32:19.757065264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 00:32:19.758155 containerd[1443]: time="2025-05-15T00:32:19.758117977Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:19.761920 containerd[1443]: time="2025-05-15T00:32:19.761881589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:19.763202 containerd[1443]: time="2025-05-15T00:32:19.763162459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.255162882s" May 15 00:32:19.763202 containerd[1443]: time="2025-05-15T00:32:19.763198099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 00:32:19.769713 containerd[1443]: time="2025-05-15T00:32:19.769669372Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:32:19.789141 containerd[1443]: time="2025-05-15T00:32:19.789088269Z" level=info msg="CreateContainer within sandbox \"0a3b77f3a0f7a76e3b07f8d0e6c552459b21d4f0c3d0894b231554b87d07364c\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4795b5a866e8858decaeedde0aefc5faefb4d50158c6ac7c63c3211a5e084af5\"" May 15 00:32:19.789614 containerd[1443]: time="2025-05-15T00:32:19.789591185Z" level=info msg="StartContainer for \"4795b5a866e8858decaeedde0aefc5faefb4d50158c6ac7c63c3211a5e084af5\"" May 15 00:32:19.847419 systemd[1]: Started cri-containerd-4795b5a866e8858decaeedde0aefc5faefb4d50158c6ac7c63c3211a5e084af5.scope - libcontainer container 4795b5a866e8858decaeedde0aefc5faefb4d50158c6ac7c63c3211a5e084af5. May 15 00:32:19.914903 containerd[1443]: time="2025-05-15T00:32:19.914836264Z" level=info msg="StartContainer for \"4795b5a866e8858decaeedde0aefc5faefb4d50158c6ac7c63c3211a5e084af5\" returns successfully" May 15 00:32:20.034424 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 00:32:20.034576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 15 00:32:20.530151 kubelet[2463]: E0515 00:32:20.525368 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:21.527144 kubelet[2463]: E0515 00:32:21.527105 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:22.662094 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:59680.service - OpenSSH per-connection server daemon (10.0.0.1:59680). May 15 00:32:22.705142 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 59680 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:22.706830 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:22.711544 systemd-logind[1422]: New session 8 of user core. May 15 00:32:22.719413 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:32:22.868738 sshd[3826]: pam_unix(sshd:session): session closed for user core May 15 00:32:22.875574 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. May 15 00:32:22.875732 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:59680.service: Deactivated successfully. May 15 00:32:22.877364 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:32:22.878928 systemd-logind[1422]: Removed session 8.
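
Everything from 00:32:15 to 00:32:16 above is one failure repeated: the Calico CNI plugin cannot stat /var/lib/calico/nodename, so containerd rejects every sandbox add and delete. That file is only written once the calico/node container is running, which is why the errors stop after its image finishes pulling and StartContainer succeeds at 00:32:19. A small triage sketch for dumps like this one — not part of kubelet or Calico, and it assumes one record per line as journalctl emits them:

    #!/usr/bin/env python3
    # Tally "stat /var/lib/calico/nodename" CNI failures in a journal dump.
    # Triage sketch only; expects one log record per line on stdin.
    import re
    import sys
    from collections import Counter

    NODENAME_ERR = "stat /var/lib/calico/nodename: no such file or directory"
    # Sandbox IDs appear as 64 hex chars after an escaped quote, e.g.
    # ... for sandbox \"40586e2a...\": plugin type=\"calico\" ...
    SANDBOX = re.compile(r'sandbox \\+"([0-9a-f]{64})')
    POD = re.compile(r'pod="([^"]+)"')

    sandboxes, pods = set(), Counter()
    for line in sys.stdin:
        if NODENAME_ERR not in line:
            continue
        sandboxes.update(SANDBOX.findall(line))
        if "Error syncing pod" in line:  # kubelet's final record per failure
            for pod in POD.findall(line):
                pods[pod] += 1

    print(f"{len(sandboxes)} sandbox(es) blocked on missing /var/lib/calico/nodename")
    for pod, count in pods.most_common():
        print(f"  {count:3d}x {pod}")

Fed with something like journalctl -u kubelet -u containerd, it shows at a glance which pods were stuck until calico-node came up.
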
May 15 00:32:26.428685 containerd[1443]: time="2025-05-15T00:32:26.428497488Z" level=info msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" May 15 00:32:26.528265 kubelet[2463]: I0515 00:32:26.528035 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2gpz6" podStartSLOduration=8.008302251 podStartE2EDuration="22.527965462s" podCreationTimestamp="2025-05-15 00:32:04 +0000 UTC" firstStartedPulling="2025-05-15 00:32:05.244176564 +0000 UTC m=+12.901859239" lastFinishedPulling="2025-05-15 00:32:19.763839775 +0000 UTC m=+27.421522450" observedRunningTime="2025-05-15 00:32:20.539808595 +0000 UTC m=+28.197491270" watchObservedRunningTime="2025-05-15 00:32:26.527965462 +0000 UTC m=+34.185648137" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.527 [INFO][3934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.528 [INFO][3934] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" iface="eth0" netns="/var/run/netns/cni-20a6d024-9483-60fc-a0fd-506dd98fbbb6" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.529 [INFO][3934] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" iface="eth0" netns="/var/run/netns/cni-20a6d024-9483-60fc-a0fd-506dd98fbbb6" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.529 [INFO][3934] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" iface="eth0" netns="/var/run/netns/cni-20a6d024-9483-60fc-a0fd-506dd98fbbb6" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.529 [INFO][3934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.529 [INFO][3934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.613 [INFO][3943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.613 [INFO][3943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.613 [INFO][3943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.625 [WARNING][3943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.625 [INFO][3943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.626 [INFO][3943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:26.629700 containerd[1443]: 2025-05-15 00:32:26.628 [INFO][3934] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:26.632099 systemd[1]: run-netns-cni\x2d20a6d024\x2d9483\x2d60fc\x2da0fd\x2d506dd98fbbb6.mount: Deactivated successfully. May 15 00:32:26.632705 containerd[1443]: time="2025-05-15T00:32:26.632101494Z" level=info msg="TearDown network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" successfully" May 15 00:32:26.632705 containerd[1443]: time="2025-05-15T00:32:26.632188254Z" level=info msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" returns successfully" May 15 00:32:26.633571 containerd[1443]: time="2025-05-15T00:32:26.633540168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-fthrc,Uid:133536c7-a00d-44e3-b80c-3429e5cc650f,Namespace:calico-apiserver,Attempt:1,}" May 15 00:32:26.805175 systemd-networkd[1362]: cali9f29b7b31db: Link UP May 15 00:32:26.805930 systemd-networkd[1362]: cali9f29b7b31db: Gained carrier May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.684 [INFO][3965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.698 [INFO][3965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0 calico-apiserver-7848fb646- calico-apiserver 133536c7-a00d-44e3-b80c-3429e5cc650f 839 0 2025-05-15 00:32:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7848fb646 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7848fb646-fthrc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f29b7b31db [] []}} ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.699 [INFO][3965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.736 [INFO][3988] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" HandleID="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.762 [INFO][3988] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" HandleID="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003045b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7848fb646-fthrc", "timestamp":"2025-05-15 00:32:26.736375606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.762 [INFO][3988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.762 [INFO][3988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.762 [INFO][3988] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.767 [INFO][3988] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.775 [INFO][3988] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.782 [INFO][3988] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.784 [INFO][3988] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.786 [INFO][3988] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.786 [INFO][3988] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.788 [INFO][3988] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.792 [INFO][3988] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.796 [INFO][3988] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.796 [INFO][3988] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" host="localhost" May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.796 [INFO][3988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:26.819933 containerd[1443]: 2025-05-15 00:32:26.796 [INFO][3988] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" HandleID="k8s-pod-network.df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.799 [INFO][3965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"133536c7-a00d-44e3-b80c-3429e5cc650f", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7848fb646-fthrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f29b7b31db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.799 [INFO][3965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.799 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f29b7b31db ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.805 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15
00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.805 [INFO][3965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"133536c7-a00d-44e3-b80c-3429e5cc650f", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f", Pod:"calico-apiserver-7848fb646-fthrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f29b7b31db", MAC:"8e:9d:db:2a:a2:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:26.820547 containerd[1443]: 2025-05-15 00:32:26.817 [INFO][3965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-fthrc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:26.835968 containerd[1443]: time="2025-05-15T00:32:26.835540502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:26.835968 containerd[1443]: time="2025-05-15T00:32:26.835950300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:26.835968 containerd[1443]: time="2025-05-15T00:32:26.835963100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:26.836323 containerd[1443]: time="2025-05-15T00:32:26.836043660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:26.867442 systemd[1]: Started cri-containerd-df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f.scope - libcontainer container df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f.
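
The IPAM walk just above ("Trying affinity for 192.168.88.128/26" through "Auto-assigned 1 out of 1 IPv4s") is Calico's block-affinity scheme in miniature: the node claims a /26 block of 64 addresses and hands out single /32s from it — 192.168.88.129 here, with .130 and .131 going to the next two pods below. A stdlib sketch of that arithmetic, purely illustrative rather than Calico's implementation:

    #!/usr/bin/env python3
    # Re-derive the block-affinity arithmetic shown in the IPAM lines above.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")  # host-affine block from the log
    assigned = ["192.168.88.129", "192.168.88.130", "192.168.88.131"]

    print(f"block {block}: {block.num_addresses} addresses "
          f"({block.network_address}..{block.broadcast_address})")
    for a in assigned:
        addr = ipaddress.ip_address(a)
        # "Affinity is confirmed" in the log requires the candidate IP to
        # fall inside the block this host has claimed.
        assert addr in block
        print(f"  {addr}/32 -> routed via a cali* veth on this host")

Note that the endpoint itself carries only the /32 (IPNetworks:["192.168.88.129/32"]); the /26 exists purely so each host can assign addresses without contending for the whole pool.
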
May 15 00:32:26.880757 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:32:26.903227 containerd[1443]: time="2025-05-15T00:32:26.903172945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-fthrc,Uid:133536c7-a00d-44e3-b80c-3429e5cc650f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f\"" May 15 00:32:26.905344 containerd[1443]: time="2025-05-15T00:32:26.905311735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:32:27.428214 containerd[1443]: time="2025-05-15T00:32:27.428162733Z" level=info msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" iface="eth0" netns="/var/run/netns/cni-a15a4bbf-14e6-f1a8-f6ea-1966233572bb" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" iface="eth0" netns="/var/run/netns/cni-a15a4bbf-14e6-f1a8-f6ea-1966233572bb" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" iface="eth0" netns="/var/run/netns/cni-a15a4bbf-14e6-f1a8-f6ea-1966233572bb" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.477 [INFO][4069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.496 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.496 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.496 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.505 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.505 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.507 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:27.511400 containerd[1443]: 2025-05-15 00:32:27.509 [INFO][4069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:27.512039 containerd[1443]: time="2025-05-15T00:32:27.511520247Z" level=info msg="TearDown network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" successfully" May 15 00:32:27.512039 containerd[1443]: time="2025-05-15T00:32:27.511547727Z" level=info msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" returns successfully" May 15 00:32:27.512120 kubelet[2463]: E0515 00:32:27.511958 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:27.512457 containerd[1443]: time="2025-05-15T00:32:27.512431723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmp6l,Uid:1f81d208-cbad-47fe-a5a2-d5e9a5c74af4,Namespace:kube-system,Attempt:1,}" May 15 00:32:27.632867 systemd[1]: run-containerd-runc-k8s.io-df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f-runc.BKK0RU.mount: Deactivated successfully. May 15 00:32:27.633251 systemd[1]: run-netns-cni\x2da15a4bbf\x2d14e6\x2df1a8\x2df6ea\x2d1966233572bb.mount: Deactivated successfully. 
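
The WARNING during the teardown above ("Asked to release address but it doesn't exist. Ignoring.") is expected rather than a new fault: the original ADD for this sandbox failed before any IP was assigned, so there is nothing to free, and the CNI spec asks plugins to complete a DEL without error even when the resources it would release are already gone. A toy sketch of that idempotent-release shape — not Calico's code; only the handle string is copied from the log:

    #!/usr/bin/env python3
    # Toy idempotent release, mirroring the ipam_plugin.go WARNING above.
    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger("ipam")

    allocations: dict[str, str] = {}  # empty: the earlier CNI ADD never assigned an IP

    def release(handle_id: str) -> None:
        ip = allocations.pop(handle_id, None)
        if ip is None:
            # Warn and carry on, so the DEL as a whole still succeeds.
            log.warning("Asked to release address but it doesn't exist. "
                        "Ignoring. handle=%s", handle_id)
            return
        log.info("released %s (handle=%s)", ip, handle_id)

    release("k8s-pod-network."
            "4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa")
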
May 15 00:32:27.656873 systemd-networkd[1362]: cali099bf4d3248: Link UP May 15 00:32:27.657026 systemd-networkd[1362]: cali099bf4d3248: Gained carrier May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.568 [INFO][4086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.582 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0 coredns-668d6bf9bc- kube-system 1f81d208-cbad-47fe-a5a2-d5e9a5c74af4 856 0 2025-05-15 00:31:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kmp6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali099bf4d3248 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.582 [INFO][4086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.611 [INFO][4100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" HandleID="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.623 [INFO][4100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" HandleID="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137f50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kmp6l", "timestamp":"2025-05-15 00:32:27.611380449 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.623 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.623 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.623 [INFO][4100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.626 [INFO][4100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.631 [INFO][4100] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.636 [INFO][4100] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.638 [INFO][4100] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.640 [INFO][4100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.640 [INFO][4100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.641 [INFO][4100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54 May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.646 [INFO][4100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.653 [INFO][4100] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.653 [INFO][4100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" host="localhost" May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.653 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:32:27.669869 containerd[1443]: 2025-05-15 00:32:27.653 [INFO][4100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" HandleID="k8s-pod-network.781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.655 [INFO][4086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kmp6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali099bf4d3248", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.655 [INFO][4086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.655 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali099bf4d3248 ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.657 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.657
[INFO][4086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54", Pod:"coredns-668d6bf9bc-kmp6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali099bf4d3248", MAC:"c6:e6:50:94:bc:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:27.670495 containerd[1443]: 2025-05-15 00:32:27.666 [INFO][4086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54" Namespace="kube-system" Pod="coredns-668d6bf9bc-kmp6l" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:27.690447 containerd[1443]: time="2025-05-15T00:32:27.689689425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:27.690447 containerd[1443]: time="2025-05-15T00:32:27.690231103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:27.690447 containerd[1443]: time="2025-05-15T00:32:27.690301822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:27.690693 containerd[1443]: time="2025-05-15T00:32:27.690453782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:27.715911 systemd[1]: Started cri-containerd-781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54.scope - libcontainer container 781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54.
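
A small decoding note for the WorkloadEndpoint dumps above: Go prints the port numbers in hex, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153, CoreDNS's usual Prometheus metrics port. A one-liner to confirm:

    #!/usr/bin/env python3
    # Decode the hex Port values from the WorkloadEndpointPort dump above.
    ports = {"dns (UDP)": 0x35, "dns-tcp (TCP)": 0x35, "metrics (TCP)": 0x23c1}
    for name, port in ports.items():
        print(f"{name:>14}: 0x{port:x} = {port}")  # 0x35 -> 53, 0x23c1 -> 9153
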
May 15 00:32:27.728690 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:32:27.752863 containerd[1443]: time="2025-05-15T00:32:27.750994996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kmp6l,Uid:1f81d208-cbad-47fe-a5a2-d5e9a5c74af4,Namespace:kube-system,Attempt:1,} returns sandbox id \"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54\"" May 15 00:32:27.754095 kubelet[2463]: E0515 00:32:27.754024 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:27.761136 containerd[1443]: time="2025-05-15T00:32:27.761094152Z" level=info msg="CreateContainer within sandbox \"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:32:27.786255 containerd[1443]: time="2025-05-15T00:32:27.786083962Z" level=info msg="CreateContainer within sandbox \"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24c2a651c37ed59fde4253a0ecd5cceb46d11fd277d753e7c406cbd49b6002d2\"" May 15 00:32:27.786490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749260228.mount: Deactivated successfully. May 15 00:32:27.787039 containerd[1443]: time="2025-05-15T00:32:27.786897278Z" level=info msg="StartContainer for \"24c2a651c37ed59fde4253a0ecd5cceb46d11fd277d753e7c406cbd49b6002d2\"" May 15 00:32:27.822440 systemd[1]: Started cri-containerd-24c2a651c37ed59fde4253a0ecd5cceb46d11fd277d753e7c406cbd49b6002d2.scope - libcontainer container 24c2a651c37ed59fde4253a0ecd5cceb46d11fd277d753e7c406cbd49b6002d2. May 15 00:32:27.850434 containerd[1443]: time="2025-05-15T00:32:27.850379320Z" level=info msg="StartContainer for \"24c2a651c37ed59fde4253a0ecd5cceb46d11fd277d753e7c406cbd49b6002d2\" returns successfully" May 15 00:32:27.889575 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:59684.service - OpenSSH per-connection server daemon (10.0.0.1:59684). May 15 00:32:27.931739 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 59684 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:27.932935 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:27.941378 systemd-logind[1422]: New session 9 of user core. May 15 00:32:27.949803 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:32:28.097379 systemd-networkd[1362]: cali9f29b7b31db: Gained IPv6LL May 15 00:32:28.139793 sshd[4220]: pam_unix(sshd:session): session closed for user core May 15 00:32:28.143430 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:59684.service: Deactivated successfully. May 15 00:32:28.147626 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:32:28.148280 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. May 15 00:32:28.149269 systemd-logind[1422]: Removed session 9. 
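
The recurring dns.go:153 "Nameserver limits exceeded" errors (00:32:20, 00:32:21, 00:32:27 above, and again below) are kubelet clamping pod DNS to a three-nameserver limit, matching the classic glibc MAXNS: the node's resolv.conf evidently lists more than three servers, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied each time a sandbox's DNS config is built. A sketch of that clamp — the fourth entry (9.9.9.9) is invented, since the omitted servers never appear in the log:

    #!/usr/bin/env python3
    # Mirror the three-nameserver clamp behind kubelet's dns.go warning above.
    MAX_NAMESERVERS = 3  # the limit visible in the applied line (glibc MAXNS)

    resolv_conf_lines = [
        "nameserver 1.1.1.1",
        "nameserver 1.0.0.1",
        "nameserver 8.8.8.8",
        "nameserver 9.9.9.9",  # invented extra entry; not shown in the log
    ]
    servers = [l.split()[1] for l in resolv_conf_lines if l.startswith("nameserver")]
    if len(servers) > MAX_NAMESERVERS:
        applied = servers[:MAX_NAMESERVERS]
        print("Nameserver limits were exceeded, some nameservers have been "
              "omitted, the applied nameserver line is: " + " ".join(applied))

The warning is cosmetic for pods that resolve through the cluster DNS, but it fires on every sandbox build until the node's resolv.conf is trimmed.
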
May 15 00:32:28.429272 containerd[1443]: time="2025-05-15T00:32:28.428600620Z" level=info msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.481 [INFO][4262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.482 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" iface="eth0" netns="/var/run/netns/cni-d8fce341-7534-8eb0-a00a-07559b3110ae" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.482 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" iface="eth0" netns="/var/run/netns/cni-d8fce341-7534-8eb0-a00a-07559b3110ae" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.482 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" iface="eth0" netns="/var/run/netns/cni-d8fce341-7534-8eb0-a00a-07559b3110ae" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.482 [INFO][4262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.482 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.506 [INFO][4270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.506 [INFO][4270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.506 [INFO][4270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.517 [WARNING][4270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.517 [INFO][4270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.518 [INFO][4270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:28.522133 containerd[1443]: 2025-05-15 00:32:28.520 [INFO][4262] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:28.522767 containerd[1443]: time="2025-05-15T00:32:28.522165795Z" level=info msg="TearDown network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" successfully" May 15 00:32:28.522767 containerd[1443]: time="2025-05-15T00:32:28.522193714Z" level=info msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" returns successfully" May 15 00:32:28.523589 containerd[1443]: time="2025-05-15T00:32:28.522846152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd5fd459d-c9qd4,Uid:a015a20f-6c2f-4521-a3f8-af0d5074817c,Namespace:calico-system,Attempt:1,}" May 15 00:32:28.548046 kubelet[2463]: E0515 00:32:28.547955 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:28.579782 kubelet[2463]: I0515 00:32:28.579719 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kmp6l" podStartSLOduration=31.579700998 podStartE2EDuration="31.579700998s" podCreationTimestamp="2025-05-15 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:32:28.561811751 +0000 UTC m=+36.219494386" watchObservedRunningTime="2025-05-15 00:32:28.579700998 +0000 UTC m=+36.237383673" May 15 00:32:28.634144 systemd[1]: run-netns-cni\x2dd8fce341\x2d7534\x2d8eb0\x2da00a\x2d07559b3110ae.mount: Deactivated successfully. May 15 00:32:28.682106 systemd-networkd[1362]: califb4e30888c5: Link UP May 15 00:32:28.684433 systemd-networkd[1362]: califb4e30888c5: Gained carrier May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.563 [INFO][4279] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.596 [INFO][4279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0 calico-kube-controllers-6bd5fd459d- calico-system a015a20f-6c2f-4521-a3f8-af0d5074817c 874 0 2025-05-15 00:32:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bd5fd459d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bd5fd459d-c9qd4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb4e30888c5 [] []}} ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.597 [INFO][4279] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.626 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" HandleID="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.644 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" HandleID="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391c60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bd5fd459d-c9qd4", "timestamp":"2025-05-15 00:32:28.626271646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.644 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.644 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.644 [INFO][4296] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.646 [INFO][4296] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.650 [INFO][4296] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.654 [INFO][4296] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.657 [INFO][4296] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.660 [INFO][4296] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.660 [INFO][4296] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.662 [INFO][4296] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5 May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.666 [INFO][4296] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.672 [INFO][4296] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.673 [INFO][4296] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.131/26] handle="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" host="localhost" May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.673 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:28.694446 containerd[1443]: 2025-05-15 00:32:28.673 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" HandleID="k8s-pod-network.e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.679 [INFO][4279] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0", GenerateName:"calico-kube-controllers-6bd5fd459d-", Namespace:"calico-system", SelfLink:"", UID:"a015a20f-6c2f-4521-a3f8-af0d5074817c", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd5fd459d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bd5fd459d-c9qd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb4e30888c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.679 [INFO][4279] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.679 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb4e30888c5 ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.682 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" 
Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.682 [INFO][4279] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0", GenerateName:"calico-kube-controllers-6bd5fd459d-", Namespace:"calico-system", SelfLink:"", UID:"a015a20f-6c2f-4521-a3f8-af0d5074817c", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd5fd459d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5", Pod:"calico-kube-controllers-6bd5fd459d-c9qd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb4e30888c5", MAC:"4a:18:fd:0f:9f:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:28.694993 containerd[1443]: 2025-05-15 00:32:28.692 [INFO][4279] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5" Namespace="calico-system" Pod="calico-kube-controllers-6bd5fd459d-c9qd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:28.736658 containerd[1443]: time="2025-05-15T00:32:28.733625085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:28.736658 containerd[1443]: time="2025-05-15T00:32:28.733735124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:28.736658 containerd[1443]: time="2025-05-15T00:32:28.733764124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:28.736658 containerd[1443]: time="2025-05-15T00:32:28.733853324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:28.772413 systemd[1]: Started cri-containerd-e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5.scope - libcontainer container e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5. 
May 15 00:32:28.786740 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:32:28.809610 containerd[1443]: time="2025-05-15T00:32:28.809504012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bd5fd459d-c9qd4,Uid:a015a20f-6c2f-4521-a3f8-af0d5074817c,Namespace:calico-system,Attempt:1,} returns sandbox id \"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5\"" May 15 00:32:28.998769 containerd[1443]: time="2025-05-15T00:32:28.998653594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:28.999465 containerd[1443]: time="2025-05-15T00:32:28.999373351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 00:32:29.000309 containerd[1443]: time="2025-05-15T00:32:29.000274907Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:29.003231 containerd[1443]: time="2025-05-15T00:32:29.003189976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:29.003743 containerd[1443]: time="2025-05-15T00:32:29.003706134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.098353999s" May 15 00:32:29.003743 containerd[1443]: time="2025-05-15T00:32:29.003739014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:32:29.005472 containerd[1443]: time="2025-05-15T00:32:29.005282808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:32:29.006311 containerd[1443]: time="2025-05-15T00:32:29.006282804Z" level=info msg="CreateContainer within sandbox \"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:32:29.015023 containerd[1443]: time="2025-05-15T00:32:29.014950651Z" level=info msg="CreateContainer within sandbox \"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70af3c15d3d35bc79872df8d25d7b84aa2f6b782c6dfbfe20113d2163e44859e\"" May 15 00:32:29.016617 containerd[1443]: time="2025-05-15T00:32:29.016414725Z" level=info msg="StartContainer for \"70af3c15d3d35bc79872df8d25d7b84aa2f6b782c6dfbfe20113d2163e44859e\"" May 15 00:32:29.047458 systemd[1]: Started cri-containerd-70af3c15d3d35bc79872df8d25d7b84aa2f6b782c6dfbfe20113d2163e44859e.scope - libcontainer container 70af3c15d3d35bc79872df8d25d7b84aa2f6b782c6dfbfe20113d2163e44859e. 
May 15 00:32:29.113308 containerd[1443]: time="2025-05-15T00:32:29.113217672Z" level=info msg="StartContainer for \"70af3c15d3d35bc79872df8d25d7b84aa2f6b782c6dfbfe20113d2163e44859e\" returns successfully" May 15 00:32:29.313428 systemd-networkd[1362]: cali099bf4d3248: Gained IPv6LL May 15 00:32:29.557376 kubelet[2463]: E0515 00:32:29.557032 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:29.571852 kubelet[2463]: I0515 00:32:29.571712 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7848fb646-fthrc" podStartSLOduration=22.471632511 podStartE2EDuration="24.571691183s" podCreationTimestamp="2025-05-15 00:32:05 +0000 UTC" firstStartedPulling="2025-05-15 00:32:26.904488619 +0000 UTC m=+34.562171254" lastFinishedPulling="2025-05-15 00:32:29.004547291 +0000 UTC m=+36.662229926" observedRunningTime="2025-05-15 00:32:29.569394592 +0000 UTC m=+37.227077267" watchObservedRunningTime="2025-05-15 00:32:29.571691183 +0000 UTC m=+37.229373858" May 15 00:32:29.763175 systemd-networkd[1362]: califb4e30888c5: Gained IPv6LL May 15 00:32:30.430303 containerd[1443]: time="2025-05-15T00:32:30.427550385Z" level=info msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" May 15 00:32:30.430303 containerd[1443]: time="2025-05-15T00:32:30.428532181Z" level=info msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" May 15 00:32:30.559040 kubelet[2463]: E0515 00:32:30.559006 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:30.560040 kubelet[2463]: I0515 00:32:30.560012 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.511 [INFO][4490] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.511 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" iface="eth0" netns="/var/run/netns/cni-978518c3-4c86-c40a-3dfb-d0125908443a" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.511 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" iface="eth0" netns="/var/run/netns/cni-978518c3-4c86-c40a-3dfb-d0125908443a" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.512 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" iface="eth0" netns="/var/run/netns/cni-978518c3-4c86-c40a-3dfb-d0125908443a" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.512 [INFO][4490] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.512 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.543 [INFO][4507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.544 [INFO][4507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.544 [INFO][4507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.555 [WARNING][4507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.555 [INFO][4507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.557 [INFO][4507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:30.567483 containerd[1443]: 2025-05-15 00:32:30.560 [INFO][4490] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:30.568522 containerd[1443]: time="2025-05-15T00:32:30.568218476Z" level=info msg="TearDown network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" successfully" May 15 00:32:30.568522 containerd[1443]: time="2025-05-15T00:32:30.568490355Z" level=info msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" returns successfully" May 15 00:32:30.569519 containerd[1443]: time="2025-05-15T00:32:30.569472472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-krrw9,Uid:f13eb429-1c6c-4728-8d7b-b418b49a379b,Namespace:calico-apiserver,Attempt:1,}" May 15 00:32:30.570629 systemd[1]: run-netns-cni\x2d978518c3\x2d4c86\x2dc40a\x2d3dfb\x2dd0125908443a.mount: Deactivated successfully. May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.523 [INFO][4491] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.523 [INFO][4491] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" iface="eth0" netns="/var/run/netns/cni-aaf8ee65-8ea0-4204-bb3e-cdabb0713737" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.523 [INFO][4491] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" iface="eth0" netns="/var/run/netns/cni-aaf8ee65-8ea0-4204-bb3e-cdabb0713737" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.524 [INFO][4491] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" iface="eth0" netns="/var/run/netns/cni-aaf8ee65-8ea0-4204-bb3e-cdabb0713737" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.524 [INFO][4491] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.524 [INFO][4491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.549 [INFO][4514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.549 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.561 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.582 [WARNING][4514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.582 [INFO][4514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.584 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:30.592672 containerd[1443]: 2025-05-15 00:32:30.587 [INFO][4491] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:30.594377 containerd[1443]: time="2025-05-15T00:32:30.594038743Z" level=info msg="TearDown network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" successfully" May 15 00:32:30.594377 containerd[1443]: time="2025-05-15T00:32:30.594069863Z" level=info msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" returns successfully" May 15 00:32:30.595268 kubelet[2463]: E0515 00:32:30.594755 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:30.596522 containerd[1443]: time="2025-05-15T00:32:30.596167935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j968j,Uid:d06094c1-9261-4b11-9614-1030ad3afe7f,Namespace:kube-system,Attempt:1,}" May 15 00:32:30.596682 systemd[1]: run-netns-cni\x2daaf8ee65\x2d8ea0\x2d4204\x2dbb3e\x2dcdabb0713737.mount: Deactivated successfully. May 15 00:32:30.766547 systemd-networkd[1362]: cali352a74a91c3: Link UP May 15 00:32:30.767089 systemd-networkd[1362]: cali352a74a91c3: Gained carrier May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.642 [INFO][4537] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.675 [INFO][4537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--j968j-eth0 coredns-668d6bf9bc- kube-system d06094c1-9261-4b11-9614-1030ad3afe7f 908 0 2025-05-15 00:31:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-j968j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali352a74a91c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.675 [INFO][4537] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.707 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" HandleID="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.725 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" HandleID="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e0e60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-j968j", 
"timestamp":"2025-05-15 00:32:30.707599412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.725 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.726 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.726 [INFO][4560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.729 [INFO][4560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.736 [INFO][4560] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.742 [INFO][4560] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.744 [INFO][4560] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.747 [INFO][4560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.747 [INFO][4560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.749 [INFO][4560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0 May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.752 [INFO][4560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" host="localhost" May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:32:30.783391 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" HandleID="k8s-pod-network.5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.764 [INFO][4537] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j968j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d06094c1-9261-4b11-9614-1030ad3afe7f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-j968j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a74a91c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.764 [INFO][4537] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.764 [INFO][4537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali352a74a91c3 ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.767 [INFO][4537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.768 
[INFO][4537] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j968j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d06094c1-9261-4b11-9614-1030ad3afe7f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0", Pod:"coredns-668d6bf9bc-j968j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a74a91c3", MAC:"e2:cc:ed:e6:e1:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:30.783975 containerd[1443]: 2025-05-15 00:32:30.780 [INFO][4537] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-j968j" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:30.805169 containerd[1443]: time="2025-05-15T00:32:30.805064540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:30.805169 containerd[1443]: time="2025-05-15T00:32:30.805139699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:30.806838 containerd[1443]: time="2025-05-15T00:32:30.805579378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:30.806838 containerd[1443]: time="2025-05-15T00:32:30.805675537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:30.845468 systemd[1]: Started cri-containerd-5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0.scope - libcontainer container 5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0. 
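The WorkloadEndpoint dump above prints the coredns port list in hex. Decoded, they are the standard CoreDNS service ports; a quick sanity check:

```go
package main

import "fmt"

// Decoding the Ports field from the endpoint struct dump:
// Port:0x35 and Port:0x23c1 are just hex renderings of the
// familiar DNS and metrics ports.
func main() {
	ports := map[string]uint16{
		"dns (UDP)":     0x35,   // 53
		"dns-tcp (TCP)": 0x35,   // 53
		"metrics (TCP)": 0x23c1, // 9153
	}
	for name, p := range ports {
		fmt.Printf("%-14s %d\n", name, p)
	}
}
```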
May 15 00:32:30.859958 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:32:30.878121 systemd-networkd[1362]: cali5570e9389f8: Link UP May 15 00:32:30.880139 systemd-networkd[1362]: cali5570e9389f8: Gained carrier May 15 00:32:30.901867 containerd[1443]: time="2025-05-15T00:32:30.901780070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j968j,Uid:d06094c1-9261-4b11-9614-1030ad3afe7f,Namespace:kube-system,Attempt:1,} returns sandbox id \"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0\"" May 15 00:32:30.903509 kubelet[2463]: E0515 00:32:30.903402 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.634 [INFO][4524] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.671 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0 calico-apiserver-7848fb646- calico-apiserver f13eb429-1c6c-4728-8d7b-b418b49a379b 907 0 2025-05-15 00:32:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7848fb646 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7848fb646-krrw9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5570e9389f8 [] []}} ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.672 [INFO][4524] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.713 [INFO][4554] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" HandleID="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.726 [INFO][4554] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" HandleID="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000390fc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7848fb646-krrw9", "timestamp":"2025-05-15 00:32:30.713403831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.727 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.760 [INFO][4554] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.830 [INFO][4554] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.839 [INFO][4554] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.844 [INFO][4554] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.847 [INFO][4554] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.850 [INFO][4554] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.850 [INFO][4554] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.853 [INFO][4554] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22 May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.864 [INFO][4554] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.870 [INFO][4554] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.870 [INFO][4554] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" host="localhost" May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.870 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:32:30.906996 containerd[1443]: 2025-05-15 00:32:30.870 [INFO][4554] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" HandleID="k8s-pod-network.1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.874 [INFO][4524] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"f13eb429-1c6c-4728-8d7b-b418b49a379b", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7848fb646-krrw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5570e9389f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.874 [INFO][4524] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.874 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5570e9389f8 ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.880 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.882 [INFO][4524] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" 
Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"f13eb429-1c6c-4728-8d7b-b418b49a379b", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22", Pod:"calico-apiserver-7848fb646-krrw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5570e9389f8", MAC:"7a:f6:54:15:f2:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:30.907932 containerd[1443]: 2025-05-15 00:32:30.899 [INFO][4524] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22" Namespace="calico-apiserver" Pod="calico-apiserver-7848fb646-krrw9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:30.907932 containerd[1443]: time="2025-05-15T00:32:30.907567329Z" level=info msg="CreateContainer within sandbox \"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:32:30.943314 containerd[1443]: time="2025-05-15T00:32:30.943159280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:30.943314 containerd[1443]: time="2025-05-15T00:32:30.943228800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:30.943314 containerd[1443]: time="2025-05-15T00:32:30.943292640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:30.944939 containerd[1443]: time="2025-05-15T00:32:30.943396679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:30.989142 systemd[1]: Started cri-containerd-1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22.scope - libcontainer container 1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22. 
May 15 00:32:31.006368 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:32:31.010799 containerd[1443]: time="2025-05-15T00:32:31.010748358Z" level=info msg="CreateContainer within sandbox \"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09ffed7d67592b31bec0a272855c534da3f15519fd1a36a590fac991314f89f1\"" May 15 00:32:31.011389 containerd[1443]: time="2025-05-15T00:32:31.011354516Z" level=info msg="StartContainer for \"09ffed7d67592b31bec0a272855c534da3f15519fd1a36a590fac991314f89f1\"" May 15 00:32:31.033772 containerd[1443]: time="2025-05-15T00:32:31.033649361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7848fb646-krrw9,Uid:f13eb429-1c6c-4728-8d7b-b418b49a379b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22\"" May 15 00:32:31.040712 containerd[1443]: time="2025-05-15T00:32:31.040661817Z" level=info msg="CreateContainer within sandbox \"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:32:31.044413 systemd[1]: Started cri-containerd-09ffed7d67592b31bec0a272855c534da3f15519fd1a36a590fac991314f89f1.scope - libcontainer container 09ffed7d67592b31bec0a272855c534da3f15519fd1a36a590fac991314f89f1. May 15 00:32:31.048932 containerd[1443]: time="2025-05-15T00:32:31.048895109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:31.050785 containerd[1443]: time="2025-05-15T00:32:31.050702823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 00:32:31.053317 containerd[1443]: time="2025-05-15T00:32:31.053215334Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:31.057412 containerd[1443]: time="2025-05-15T00:32:31.057343960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:32:31.058824 containerd[1443]: time="2025-05-15T00:32:31.058782635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 2.053461987s" May 15 00:32:31.058869 containerd[1443]: time="2025-05-15T00:32:31.058824835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 00:32:31.069168 containerd[1443]: time="2025-05-15T00:32:31.069099000Z" level=info msg="CreateContainer within sandbox \"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:32:31.070212 containerd[1443]: time="2025-05-15T00:32:31.070164077Z" 
level=info msg="CreateContainer within sandbox \"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"602825c61582d0f9abc94c473768436b1049681955020e7bf456c8ef2e661cde\"" May 15 00:32:31.070929 containerd[1443]: time="2025-05-15T00:32:31.070780955Z" level=info msg="StartContainer for \"602825c61582d0f9abc94c473768436b1049681955020e7bf456c8ef2e661cde\"" May 15 00:32:31.079190 containerd[1443]: time="2025-05-15T00:32:31.079149966Z" level=info msg="CreateContainer within sandbox \"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"864a8490261e385aa37586862fac02a7f9df13e55a63c14e6ef0ed2a46feb095\"" May 15 00:32:31.082024 containerd[1443]: time="2025-05-15T00:32:31.081988437Z" level=info msg="StartContainer for \"09ffed7d67592b31bec0a272855c534da3f15519fd1a36a590fac991314f89f1\" returns successfully" May 15 00:32:31.082735 containerd[1443]: time="2025-05-15T00:32:31.082534115Z" level=info msg="StartContainer for \"864a8490261e385aa37586862fac02a7f9df13e55a63c14e6ef0ed2a46feb095\"" May 15 00:32:31.103432 systemd[1]: Started cri-containerd-602825c61582d0f9abc94c473768436b1049681955020e7bf456c8ef2e661cde.scope - libcontainer container 602825c61582d0f9abc94c473768436b1049681955020e7bf456c8ef2e661cde. May 15 00:32:31.117417 systemd[1]: Started cri-containerd-864a8490261e385aa37586862fac02a7f9df13e55a63c14e6ef0ed2a46feb095.scope - libcontainer container 864a8490261e385aa37586862fac02a7f9df13e55a63c14e6ef0ed2a46feb095. May 15 00:32:31.153119 containerd[1443]: time="2025-05-15T00:32:31.153006836Z" level=info msg="StartContainer for \"602825c61582d0f9abc94c473768436b1049681955020e7bf456c8ef2e661cde\" returns successfully" May 15 00:32:31.160453 containerd[1443]: time="2025-05-15T00:32:31.160295691Z" level=info msg="StartContainer for \"864a8490261e385aa37586862fac02a7f9df13e55a63c14e6ef0ed2a46feb095\" returns successfully" May 15 00:32:31.427826 containerd[1443]: time="2025-05-15T00:32:31.427783504Z" level=info msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.479 [INFO][4836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.479 [INFO][4836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" iface="eth0" netns="/var/run/netns/cni-c3491b54-fd5b-eea2-1fb4-db7681b5a952" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.480 [INFO][4836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" iface="eth0" netns="/var/run/netns/cni-c3491b54-fd5b-eea2-1fb4-db7681b5a952" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.481 [INFO][4836] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" iface="eth0" netns="/var/run/netns/cni-c3491b54-fd5b-eea2-1fb4-db7681b5a952" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.481 [INFO][4836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.481 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.507 [INFO][4846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.507 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.507 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.516 [WARNING][4846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.516 [INFO][4846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.517 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:31.521136 containerd[1443]: 2025-05-15 00:32:31.519 [INFO][4836] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:31.521939 containerd[1443]: time="2025-05-15T00:32:31.521368747Z" level=info msg="TearDown network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" successfully" May 15 00:32:31.521939 containerd[1443]: time="2025-05-15T00:32:31.521403307Z" level=info msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" returns successfully" May 15 00:32:31.522570 containerd[1443]: time="2025-05-15T00:32:31.522537543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzwnl,Uid:21b8146b-053c-41d9-a1d1-bb9a962f2acc,Namespace:calico-system,Attempt:1,}" May 15 00:32:31.565796 kubelet[2463]: E0515 00:32:31.565397 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:31.577768 kubelet[2463]: I0515 00:32:31.575968 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7848fb646-krrw9" podStartSLOduration=26.575951282 podStartE2EDuration="26.575951282s" podCreationTimestamp="2025-05-15 00:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:32:31.574638926 +0000 UTC m=+39.232321601" watchObservedRunningTime="2025-05-15 00:32:31.575951282 +0000 UTC m=+39.233633957" May 15 00:32:31.600761 kubelet[2463]: I0515 00:32:31.599969 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j968j" podStartSLOduration=34.599952521 podStartE2EDuration="34.599952521s" podCreationTimestamp="2025-05-15 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:32:31.598686245 +0000 UTC m=+39.256368920" watchObservedRunningTime="2025-05-15 00:32:31.599952521 +0000 UTC m=+39.257635156" May 15 00:32:31.604331 kubelet[2463]: I0515 00:32:31.602131 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bd5fd459d-c9qd4" podStartSLOduration=25.352222131 podStartE2EDuration="27.602120273s" podCreationTimestamp="2025-05-15 00:32:04 +0000 UTC" firstStartedPulling="2025-05-15 00:32:28.810734567 +0000 UTC m=+36.468417242" lastFinishedPulling="2025-05-15 00:32:31.060632709 +0000 UTC m=+38.718315384" observedRunningTime="2025-05-15 00:32:31.588994998 +0000 UTC m=+39.246677673" watchObservedRunningTime="2025-05-15 00:32:31.602120273 +0000 UTC m=+39.259802988" May 15 00:32:31.638649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798095865.mount: Deactivated successfully. May 15 00:32:31.638748 systemd[1]: run-netns-cni\x2dc3491b54\x2dfd5b\x2deea2\x2d1fb4\x2ddb7681b5a952.mount: Deactivated successfully. 
May 15 00:32:31.754874 systemd-networkd[1362]: calid1c2dc782dc: Link UP
May 15 00:32:31.755189 systemd-networkd[1362]: calid1c2dc782dc: Gained carrier
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.591 [INFO][4854] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.618 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gzwnl-eth0 csi-node-driver- calico-system 21b8146b-053c-41d9-a1d1-bb9a962f2acc 941 0 2025-05-15 00:32:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gzwnl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid1c2dc782dc [] []}} ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.618 [INFO][4854] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.656 [INFO][4872] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" HandleID="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.723 [INFO][4872] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" HandleID="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e1260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gzwnl", "timestamp":"2025-05-15 00:32:31.656724768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.723 [INFO][4872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.723 [INFO][4872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.723 [INFO][4872] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.725 [INFO][4872] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.729 [INFO][4872] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.733 [INFO][4872] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.735 [INFO][4872] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.737 [INFO][4872] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.737 [INFO][4872] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.739 [INFO][4872] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.744 [INFO][4872] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.750 [INFO][4872] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.750 [INFO][4872] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" host="localhost"
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.750 [INFO][4872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 15 00:32:31.771879 containerd[1443]: 2025-05-15 00:32:31.750 [INFO][4872] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" HandleID="k8s-pod-network.f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.752 [INFO][4854] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzwnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b8146b-053c-41d9-a1d1-bb9a962f2acc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gzwnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c2dc782dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.752 [INFO][4854] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.752 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1c2dc782dc ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.754 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.755 [INFO][4854] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzwnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b8146b-053c-41d9-a1d1-bb9a962f2acc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984", Pod:"csi-node-driver-gzwnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c2dc782dc", MAC:"0e:e1:9f:64:15:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:31.772764 containerd[1443]: 2025-05-15 00:32:31.767 [INFO][4854] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984" Namespace="calico-system" Pod="csi-node-driver-gzwnl" WorkloadEndpoint="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:31.797026 containerd[1443]: time="2025-05-15T00:32:31.796922613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:32:31.797026 containerd[1443]: time="2025-05-15T00:32:31.796998773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:32:31.797026 containerd[1443]: time="2025-05-15T00:32:31.797019173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:31.797202 containerd[1443]: time="2025-05-15T00:32:31.797112092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:32:31.829529 systemd[1]: Started cri-containerd-f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984.scope - libcontainer container f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984. 
May 15 00:32:31.842825 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:32:31.856393 containerd[1443]: time="2025-05-15T00:32:31.856339131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gzwnl,Uid:21b8146b-053c-41d9-a1d1-bb9a962f2acc,Namespace:calico-system,Attempt:1,} returns sandbox id \"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984\""
May 15 00:32:31.858598 containerd[1443]: time="2025-05-15T00:32:31.858411964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\""
May 15 00:32:32.001490 systemd-networkd[1362]: cali352a74a91c3: Gained IPv6LL
May 15 00:32:32.299769 kubelet[2463]: I0515 00:32:32.299668 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 00:32:32.300309 kubelet[2463]: E0515 00:32:32.300288 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:32.572093 kubelet[2463]: I0515 00:32:32.571978 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 00:32:32.572093 kubelet[2463]: I0515 00:32:32.572036 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 00:32:32.572706 kubelet[2463]: E0515 00:32:32.572635 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:32.572792 kubelet[2463]: E0515 00:32:32.572765 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:32.771374 systemd-networkd[1362]: cali5570e9389f8: Gained IPv6LL
May 15 00:32:32.799311 kernel: bpftool[4976]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 15 00:32:33.003104 systemd-networkd[1362]: vxlan.calico: Link UP
May 15 00:32:33.003120 systemd-networkd[1362]: vxlan.calico: Gained carrier
May 15 00:32:33.137912 containerd[1443]: time="2025-05-15T00:32:33.137860735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:33.140693 containerd[1443]: time="2025-05-15T00:32:33.138517093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935"
May 15 00:32:33.140693 containerd[1443]: time="2025-05-15T00:32:33.139467370Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:33.143255 containerd[1443]: time="2025-05-15T00:32:33.142432801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:33.144075 containerd[1443]: time="2025-05-15T00:32:33.144035277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.285586753s"
May 15 00:32:33.144075 containerd[1443]: time="2025-05-15T00:32:33.144072997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\""
May 15 00:32:33.171614 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:51836.service - OpenSSH per-connection server daemon (10.0.0.1:51836).
May 15 00:32:33.174409 containerd[1443]: time="2025-05-15T00:32:33.174222267Z" level=info msg="CreateContainer within sandbox \"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 15 00:32:33.208370 containerd[1443]: time="2025-05-15T00:32:33.208077846Z" level=info msg="CreateContainer within sandbox \"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"79d29ba1a4df21fc257d7cd81c1e91b0f0bb1f52d0631aec79fa64cdb1aa4429\""
May 15 00:32:33.211347 containerd[1443]: time="2025-05-15T00:32:33.209448002Z" level=info msg="StartContainer for \"79d29ba1a4df21fc257d7cd81c1e91b0f0bb1f52d0631aec79fa64cdb1aa4429\""
May 15 00:32:33.251921 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 51836 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:32:33.253956 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:32:33.267508 systemd[1]: Started cri-containerd-79d29ba1a4df21fc257d7cd81c1e91b0f0bb1f52d0631aec79fa64cdb1aa4429.scope - libcontainer container 79d29ba1a4df21fc257d7cd81c1e91b0f0bb1f52d0631aec79fa64cdb1aa4429.
May 15 00:32:33.277147 systemd-logind[1422]: New session 10 of user core.
May 15 00:32:33.281441 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 00:32:33.321742 containerd[1443]: time="2025-05-15T00:32:33.321585708Z" level=info msg="StartContainer for \"79d29ba1a4df21fc257d7cd81c1e91b0f0bb1f52d0631aec79fa64cdb1aa4429\" returns successfully"
May 15 00:32:33.323612 containerd[1443]: time="2025-05-15T00:32:33.323576262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 15 00:32:33.409467 systemd-networkd[1362]: calid1c2dc782dc: Gained IPv6LL
May 15 00:32:33.518921 sshd[5047]: pam_unix(sshd:session): session closed for user core
May 15 00:32:33.530972 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:51836.service: Deactivated successfully.
May 15 00:32:33.533894 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:32:33.535211 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
May 15 00:32:33.541589 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon (10.0.0.1:51840).
May 15 00:32:33.542436 systemd-logind[1422]: Removed session 10.
May 15 00:32:33.575730 kubelet[2463]: E0515 00:32:33.575702 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:32:33.576072 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:32:33.577597 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:32:33.582843 systemd-logind[1422]: New session 11 of user core.
May 15 00:32:33.591610 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 00:32:33.778131 sshd[5154]: pam_unix(sshd:session): session closed for user core
May 15 00:32:33.787606 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:51840.service: Deactivated successfully.
May 15 00:32:33.791636 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:32:33.794178 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
May 15 00:32:33.802108 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:51842.service - OpenSSH per-connection server daemon (10.0.0.1:51842).
May 15 00:32:33.804753 systemd-logind[1422]: Removed session 11.
May 15 00:32:33.842198 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 51842 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:32:33.843577 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:32:33.848780 systemd-logind[1422]: New session 12 of user core.
May 15 00:32:33.854430 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:32:33.985422 sshd[5167]: pam_unix(sshd:session): session closed for user core
May 15 00:32:33.989468 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:51842.service: Deactivated successfully.
May 15 00:32:33.991372 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:32:33.991995 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit.
May 15 00:32:33.992758 systemd-logind[1422]: Removed session 12.
May 15 00:32:34.817379 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL
May 15 00:32:35.084534 containerd[1443]: time="2025-05-15T00:32:35.084413551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:35.086795 containerd[1443]: time="2025-05-15T00:32:35.086756185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 15 00:32:35.087439 containerd[1443]: time="2025-05-15T00:32:35.087399104Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:35.093017 containerd[1443]: time="2025-05-15T00:32:35.092971609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:32:35.094035 containerd[1443]: time="2025-05-15T00:32:35.093645767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.769897826s"
May 15 00:32:35.094035 containerd[1443]: time="2025-05-15T00:32:35.093687167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 15 00:32:35.099155 containerd[1443]: time="2025-05-15T00:32:35.099097513Z" level=info msg="CreateContainer within sandbox \"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 15 00:32:35.117772 containerd[1443]: time="2025-05-15T00:32:35.117729824Z" level=info msg="CreateContainer within sandbox \"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"abf3b28bf825d21c6e18bf0acb34274c1565a996fa954cca3b2634063c16b4fb\""
May 15 00:32:35.118326 containerd[1443]: time="2025-05-15T00:32:35.118225543Z" level=info msg="StartContainer for \"abf3b28bf825d21c6e18bf0acb34274c1565a996fa954cca3b2634063c16b4fb\""
May 15 00:32:35.153466 systemd[1]: Started cri-containerd-abf3b28bf825d21c6e18bf0acb34274c1565a996fa954cca3b2634063c16b4fb.scope - libcontainer container abf3b28bf825d21c6e18bf0acb34274c1565a996fa954cca3b2634063c16b4fb.
May 15 00:32:35.178879 containerd[1443]: time="2025-05-15T00:32:35.178828144Z" level=info msg="StartContainer for \"abf3b28bf825d21c6e18bf0acb34274c1565a996fa954cca3b2634063c16b4fb\" returns successfully"
May 15 00:32:35.501710 kubelet[2463]: I0515 00:32:35.501663 2463 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 15 00:32:35.501710 kubelet[2463]: I0515 00:32:35.501718 2463 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 15 00:32:35.596067 kubelet[2463]: I0515 00:32:35.596004 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gzwnl" podStartSLOduration=28.358858175 podStartE2EDuration="31.595988132s" podCreationTimestamp="2025-05-15 00:32:04 +0000 UTC" firstStartedPulling="2025-05-15 00:32:31.858057606 +0000 UTC m=+39.515740241" lastFinishedPulling="2025-05-15 00:32:35.095187523 +0000 UTC m=+42.752870198" observedRunningTime="2025-05-15 00:32:35.594643535 +0000 UTC m=+43.252326210" watchObservedRunningTime="2025-05-15 00:32:35.595988132 +0000 UTC m=+43.253670807"
May 15 00:32:38.998889 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:51856.service - OpenSSH per-connection server daemon (10.0.0.1:51856).
May 15 00:32:39.044884 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 51856 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:32:39.046508 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:32:39.050465 systemd-logind[1422]: New session 13 of user core.
May 15 00:32:39.060386 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:32:39.266467 sshd[5238]: pam_unix(sshd:session): session closed for user core
May 15 00:32:39.274351 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:51856.service: Deactivated successfully.
May 15 00:32:39.276965 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:32:39.277712 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit.
May 15 00:32:39.278855 systemd-logind[1422]: Removed session 13.
May 15 00:32:43.297173 kubelet[2463]: I0515 00:32:43.297112 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 00:32:44.277162 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:36592.service - OpenSSH per-connection server daemon (10.0.0.1:36592).
May 15 00:32:44.318301 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 36592 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:44.319755 sshd[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:44.323572 systemd-logind[1422]: New session 14 of user core. May 15 00:32:44.336472 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 00:32:44.474560 sshd[5300]: pam_unix(sshd:session): session closed for user core May 15 00:32:44.478089 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:36592.service: Deactivated successfully. May 15 00:32:44.479888 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:32:44.480743 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. May 15 00:32:44.481835 systemd-logind[1422]: Removed session 14. May 15 00:32:49.487908 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:36608.service - OpenSSH per-connection server daemon (10.0.0.1:36608). May 15 00:32:49.526994 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 36608 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:49.527461 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:49.534073 systemd-logind[1422]: New session 15 of user core. May 15 00:32:49.545498 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:32:49.682595 sshd[5316]: pam_unix(sshd:session): session closed for user core May 15 00:32:49.686920 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:36608.service: Deactivated successfully. May 15 00:32:49.689685 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:32:49.690922 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. May 15 00:32:49.695313 systemd-logind[1422]: Removed session 15. May 15 00:32:51.610327 kubelet[2463]: E0515 00:32:51.610227 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:32:52.405011 containerd[1443]: time="2025-05-15T00:32:52.404957263Z" level=info msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.471 [WARNING][5367] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j968j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d06094c1-9261-4b11-9614-1030ad3afe7f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0", Pod:"coredns-668d6bf9bc-j968j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a74a91c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.471 [INFO][5367] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.471 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" iface="eth0" netns="" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.471 [INFO][5367] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.471 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.493 [INFO][5377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.493 [INFO][5377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.493 [INFO][5377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.504 [WARNING][5377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.504 [INFO][5377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.505 [INFO][5377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.508694 containerd[1443]: 2025-05-15 00:32:52.507 [INFO][5367] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.509222 containerd[1443]: time="2025-05-15T00:32:52.508748212Z" level=info msg="TearDown network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" successfully" May 15 00:32:52.509222 containerd[1443]: time="2025-05-15T00:32:52.508785372Z" level=info msg="StopPodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" returns successfully" May 15 00:32:52.509620 containerd[1443]: time="2025-05-15T00:32:52.509545531Z" level=info msg="RemovePodSandbox for \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" May 15 00:32:52.514177 containerd[1443]: time="2025-05-15T00:32:52.513900527Z" level=info msg="Forcibly stopping sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\"" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.551 [WARNING][5399] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j968j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d06094c1-9261-4b11-9614-1030ad3afe7f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bef0aa152c70831a5435d4cd606147ef1a45a0582f7816b7e3e80bd003c55c0", Pod:"coredns-668d6bf9bc-j968j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a74a91c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.551 [INFO][5399] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.551 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" iface="eth0" netns="" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.551 [INFO][5399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.551 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.570 [INFO][5408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.570 [INFO][5408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.570 [INFO][5408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.579 [WARNING][5408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.579 [INFO][5408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" HandleID="k8s-pod-network.320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" Workload="localhost-k8s-coredns--668d6bf9bc--j968j-eth0" May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.580 [INFO][5408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.585097 containerd[1443]: 2025-05-15 00:32:52.583 [INFO][5399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155" May 15 00:32:52.585692 containerd[1443]: time="2025-05-15T00:32:52.585126825Z" level=info msg="TearDown network for sandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" successfully" May 15 00:32:52.591144 containerd[1443]: time="2025-05-15T00:32:52.591104580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:52.591277 containerd[1443]: time="2025-05-15T00:32:52.591174460Z" level=info msg="RemovePodSandbox \"320ff434149e1380a33491d1872f0e85adf783d445d0e5eeddcf3cdcbeb6b155\" returns successfully" May 15 00:32:52.591671 containerd[1443]: time="2025-05-15T00:32:52.591644659Z" level=info msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.628 [WARNING][5431] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54", Pod:"coredns-668d6bf9bc-kmp6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali099bf4d3248", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.628 [INFO][5431] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.628 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" iface="eth0" netns="" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.628 [INFO][5431] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.628 [INFO][5431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.647 [INFO][5439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.647 [INFO][5439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.647 [INFO][5439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.655 [WARNING][5439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.656 [INFO][5439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.657 [INFO][5439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.660867 containerd[1443]: 2025-05-15 00:32:52.658 [INFO][5431] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.660867 containerd[1443]: time="2025-05-15T00:32:52.660832839Z" level=info msg="TearDown network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" successfully" May 15 00:32:52.660867 containerd[1443]: time="2025-05-15T00:32:52.660869359Z" level=info msg="StopPodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" returns successfully" May 15 00:32:52.662716 containerd[1443]: time="2025-05-15T00:32:52.662387598Z" level=info msg="RemovePodSandbox for \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" May 15 00:32:52.662716 containerd[1443]: time="2025-05-15T00:32:52.662428918Z" level=info msg="Forcibly stopping sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\"" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.698 [WARNING][5462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1f81d208-cbad-47fe-a5a2-d5e9a5c74af4", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 31, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"781b3d49dae39d0545d671b62ec5bbafa2ab720fbfdbf0f0f1db4a04e7775b54", Pod:"coredns-668d6bf9bc-kmp6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali099bf4d3248", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.698 [INFO][5462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.698 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" iface="eth0" netns="" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.698 [INFO][5462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.698 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.718 [INFO][5470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.718 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.718 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.726 [WARNING][5470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.727 [INFO][5470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" HandleID="k8s-pod-network.4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" Workload="localhost-k8s-coredns--668d6bf9bc--kmp6l-eth0" May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.728 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.732819 containerd[1443]: 2025-05-15 00:32:52.729 [INFO][5462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa" May 15 00:32:52.733267 containerd[1443]: time="2025-05-15T00:32:52.732856256Z" level=info msg="TearDown network for sandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" successfully" May 15 00:32:52.739135 containerd[1443]: time="2025-05-15T00:32:52.739097171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:52.739207 containerd[1443]: time="2025-05-15T00:32:52.739164530Z" level=info msg="RemovePodSandbox \"4183f67ecbc0e36a46cd4b949e7cead7b6215a799270446af879a82fc651d8aa\" returns successfully" May 15 00:32:52.739944 containerd[1443]: time="2025-05-15T00:32:52.739713530Z" level=info msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.778 [WARNING][5493] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"133536c7-a00d-44e3-b80c-3429e5cc650f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f", Pod:"calico-apiserver-7848fb646-fthrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f29b7b31db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.778 [INFO][5493] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.778 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" iface="eth0" netns="" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.778 [INFO][5493] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.778 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.798 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.798 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.798 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.810 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.810 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.814 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.817789 containerd[1443]: 2025-05-15 00:32:52.816 [INFO][5493] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.818828 containerd[1443]: time="2025-05-15T00:32:52.817809062Z" level=info msg="TearDown network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" successfully" May 15 00:32:52.818828 containerd[1443]: time="2025-05-15T00:32:52.817835342Z" level=info msg="StopPodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" returns successfully" May 15 00:32:52.818828 containerd[1443]: time="2025-05-15T00:32:52.818473181Z" level=info msg="RemovePodSandbox for \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" May 15 00:32:52.818828 containerd[1443]: time="2025-05-15T00:32:52.818504941Z" level=info msg="Forcibly stopping sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\"" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.854 [WARNING][5525] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"133536c7-a00d-44e3-b80c-3429e5cc650f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df21634eff3fe653f9cef4859132e31c8368b095942b754b76d8d48b8239884f", Pod:"calico-apiserver-7848fb646-fthrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f29b7b31db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.855 [INFO][5525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.855 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" iface="eth0" netns="" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.855 [INFO][5525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.855 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.874 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.874 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.874 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.883 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.883 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" HandleID="k8s-pod-network.366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" Workload="localhost-k8s-calico--apiserver--7848fb646--fthrc-eth0" May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.884 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.888097 containerd[1443]: 2025-05-15 00:32:52.886 [INFO][5525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda" May 15 00:32:52.888569 containerd[1443]: time="2025-05-15T00:32:52.888139760Z" level=info msg="TearDown network for sandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" successfully" May 15 00:32:52.902581 containerd[1443]: time="2025-05-15T00:32:52.902529548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:52.902705 containerd[1443]: time="2025-05-15T00:32:52.902600908Z" level=info msg="RemovePodSandbox \"366602d57fa7bafa18bf59fab6a4af741848f05aeecfd46323f7081a1cb3beda\" returns successfully" May 15 00:32:52.903354 containerd[1443]: time="2025-05-15T00:32:52.903063867Z" level=info msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.944 [WARNING][5556] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"f13eb429-1c6c-4728-8d7b-b418b49a379b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22", Pod:"calico-apiserver-7848fb646-krrw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5570e9389f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.944 [INFO][5556] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.944 [INFO][5556] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" iface="eth0" netns="" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.944 [INFO][5556] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.944 [INFO][5556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.964 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.964 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.964 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.973 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.973 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.974 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:52.982109 containerd[1443]: 2025-05-15 00:32:52.978 [INFO][5556] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:52.982109 containerd[1443]: time="2025-05-15T00:32:52.981814238Z" level=info msg="TearDown network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" successfully" May 15 00:32:52.982109 containerd[1443]: time="2025-05-15T00:32:52.981838518Z" level=info msg="StopPodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" returns successfully" May 15 00:32:52.982729 containerd[1443]: time="2025-05-15T00:32:52.982282478Z" level=info msg="RemovePodSandbox for \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" May 15 00:32:52.982729 containerd[1443]: time="2025-05-15T00:32:52.982310718Z" level=info msg="Forcibly stopping sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\"" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.020 [WARNING][5587] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0", GenerateName:"calico-apiserver-7848fb646-", Namespace:"calico-apiserver", SelfLink:"", UID:"f13eb429-1c6c-4728-8d7b-b418b49a379b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7848fb646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d143d53a4c1dfc51d082b9fd5331dc3300e65751ec63a2af062c1942f20cc22", Pod:"calico-apiserver-7848fb646-krrw9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5570e9389f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.020 [INFO][5587] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.020 [INFO][5587] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" iface="eth0" netns="" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.020 [INFO][5587] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.020 [INFO][5587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.040 [INFO][5595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.040 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.040 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.050 [WARNING][5595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.050 [INFO][5595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" HandleID="k8s-pod-network.f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" Workload="localhost-k8s-calico--apiserver--7848fb646--krrw9-eth0" May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.051 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:53.054845 containerd[1443]: 2025-05-15 00:32:53.053 [INFO][5587] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae" May 15 00:32:53.055293 containerd[1443]: time="2025-05-15T00:32:53.054877977Z" level=info msg="TearDown network for sandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" successfully" May 15 00:32:53.065831 containerd[1443]: time="2025-05-15T00:32:53.065739529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:53.065831 containerd[1443]: time="2025-05-15T00:32:53.065812408Z" level=info msg="RemovePodSandbox \"f5bac8a401845a0dfa6f1fa14800654fcd3ffd06e16d44711af076e3c24098ae\" returns successfully" May 15 00:32:53.066379 containerd[1443]: time="2025-05-15T00:32:53.066347688Z" level=info msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.101 [WARNING][5617] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0", GenerateName:"calico-kube-controllers-6bd5fd459d-", Namespace:"calico-system", SelfLink:"", UID:"a015a20f-6c2f-4521-a3f8-af0d5074817c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd5fd459d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5", Pod:"calico-kube-controllers-6bd5fd459d-c9qd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb4e30888c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.101 [INFO][5617] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.101 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" iface="eth0" netns="" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.101 [INFO][5617] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.101 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.121 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.121 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.121 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.134 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.134 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.136 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:53.139986 containerd[1443]: 2025-05-15 00:32:53.137 [INFO][5617] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.139986 containerd[1443]: time="2025-05-15T00:32:53.139867628Z" level=info msg="TearDown network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" successfully" May 15 00:32:53.139986 containerd[1443]: time="2025-05-15T00:32:53.139893028Z" level=info msg="StopPodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" returns successfully" May 15 00:32:53.140657 containerd[1443]: time="2025-05-15T00:32:53.140377907Z" level=info msg="RemovePodSandbox for \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" May 15 00:32:53.140657 containerd[1443]: time="2025-05-15T00:32:53.140409507Z" level=info msg="Forcibly stopping sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\"" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.180 [WARNING][5648] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0", GenerateName:"calico-kube-controllers-6bd5fd459d-", Namespace:"calico-system", SelfLink:"", UID:"a015a20f-6c2f-4521-a3f8-af0d5074817c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bd5fd459d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16aa99274a5c148e9eb4f2be1c883f3f9da6f5f7242c3c6cb0ce151ae0b72c5", Pod:"calico-kube-controllers-6bd5fd459d-c9qd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb4e30888c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.181 [INFO][5648] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.181 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" iface="eth0" netns="" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.181 [INFO][5648] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.181 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.199 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.199 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.199 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.209 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.209 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" HandleID="k8s-pod-network.40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" Workload="localhost-k8s-calico--kube--controllers--6bd5fd459d--c9qd4-eth0" May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.210 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:53.213934 containerd[1443]: 2025-05-15 00:32:53.212 [INFO][5648] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093" May 15 00:32:53.213934 containerd[1443]: time="2025-05-15T00:32:53.213899287Z" level=info msg="TearDown network for sandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" successfully" May 15 00:32:53.216675 containerd[1443]: time="2025-05-15T00:32:53.216645845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:53.216763 containerd[1443]: time="2025-05-15T00:32:53.216700085Z" level=info msg="RemovePodSandbox \"40586e2a95e7330796519ed7225c5424711fa037d3880a74c36dec044c18b093\" returns successfully" May 15 00:32:53.217529 containerd[1443]: time="2025-05-15T00:32:53.217191244Z" level=info msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.252 [WARNING][5679] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzwnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b8146b-053c-41d9-a1d1-bb9a962f2acc", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984", Pod:"csi-node-driver-gzwnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c2dc782dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.252 [INFO][5679] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.252 [INFO][5679] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" iface="eth0" netns="" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.252 [INFO][5679] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.252 [INFO][5679] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.272 [INFO][5687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.272 [INFO][5687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.272 [INFO][5687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.281 [WARNING][5687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.281 [INFO][5687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.282 [INFO][5687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:53.286405 containerd[1443]: 2025-05-15 00:32:53.284 [INFO][5679] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.287225 containerd[1443]: time="2025-05-15T00:32:53.286507708Z" level=info msg="TearDown network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" successfully" May 15 00:32:53.287225 containerd[1443]: time="2025-05-15T00:32:53.286534388Z" level=info msg="StopPodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" returns successfully" May 15 00:32:53.287225 containerd[1443]: time="2025-05-15T00:32:53.286975867Z" level=info msg="RemovePodSandbox for \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" May 15 00:32:53.287225 containerd[1443]: time="2025-05-15T00:32:53.287005147Z" level=info msg="Forcibly stopping sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\"" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.326 [WARNING][5710] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gzwnl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21b8146b-053c-41d9-a1d1-bb9a962f2acc", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 32, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f14cfa501a437fd344ac566b9a7a76ba30c4e853c6e5648516e05d2f130a2984", Pod:"csi-node-driver-gzwnl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1c2dc782dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.326 [INFO][5710] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.326 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" iface="eth0" netns="" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.326 [INFO][5710] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.326 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.347 [INFO][5724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.347 [INFO][5724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.347 [INFO][5724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.355 [WARNING][5724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.355 [INFO][5724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" HandleID="k8s-pod-network.4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" Workload="localhost-k8s-csi--node--driver--gzwnl-eth0" May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.357 [INFO][5724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:32:53.360749 containerd[1443]: 2025-05-15 00:32:53.359 [INFO][5710] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21" May 15 00:32:53.360749 containerd[1443]: time="2025-05-15T00:32:53.360744287Z" level=info msg="TearDown network for sandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" successfully" May 15 00:32:53.363357 containerd[1443]: time="2025-05-15T00:32:53.363326925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:32:53.363427 containerd[1443]: time="2025-05-15T00:32:53.363383765Z" level=info msg="RemovePodSandbox \"4f2862cc450b6c8c32f41a6a05ca98d55660ed6b64dbdb50132d27ad2be1ca21\" returns successfully" May 15 00:32:54.693988 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:59396.service - OpenSSH per-connection server daemon (10.0.0.1:59396). May 15 00:32:54.734920 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 59396 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:54.736196 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:54.739979 systemd-logind[1422]: New session 16 of user core. May 15 00:32:54.753441 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:32:54.942317 sshd[5735]: pam_unix(sshd:session): session closed for user core May 15 00:32:54.954307 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:59396.service: Deactivated successfully. May 15 00:32:54.956296 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:32:54.957765 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. May 15 00:32:54.960554 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:59404.service - OpenSSH per-connection server daemon (10.0.0.1:59404). May 15 00:32:54.961669 systemd-logind[1422]: Removed session 16. May 15 00:32:55.000059 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 59404 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:55.000928 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:55.004845 systemd-logind[1422]: New session 17 of user core. May 15 00:32:55.018436 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:32:55.256567 sshd[5749]: pam_unix(sshd:session): session closed for user core May 15 00:32:55.269378 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:59404.service: Deactivated successfully. May 15 00:32:55.271300 systemd[1]: session-17.scope: Deactivated successfully. 
May 15 00:32:55.272715 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. May 15 00:32:55.274340 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:59416.service - OpenSSH per-connection server daemon (10.0.0.1:59416). May 15 00:32:55.275636 systemd-logind[1422]: Removed session 17. May 15 00:32:55.317210 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 59416 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:55.318625 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:55.323415 systemd-logind[1422]: New session 18 of user core. May 15 00:32:55.328404 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:32:56.103913 sshd[5761]: pam_unix(sshd:session): session closed for user core May 15 00:32:56.113820 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:59416.service: Deactivated successfully. May 15 00:32:56.120635 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:32:56.123329 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. May 15 00:32:56.128542 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:59432.service - OpenSSH per-connection server daemon (10.0.0.1:59432). May 15 00:32:56.131026 systemd-logind[1422]: Removed session 18. May 15 00:32:56.163746 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 59432 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:56.165352 sshd[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:56.172001 systemd-logind[1422]: New session 19 of user core. May 15 00:32:56.175393 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:32:56.482265 sshd[5780]: pam_unix(sshd:session): session closed for user core May 15 00:32:56.492722 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:59432.service: Deactivated successfully. May 15 00:32:56.494880 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:32:56.496345 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. May 15 00:32:56.508582 systemd[1]: Started sshd@19-10.0.0.130:22-10.0.0.1:59434.service - OpenSSH per-connection server daemon (10.0.0.1:59434). May 15 00:32:56.509580 systemd-logind[1422]: Removed session 19. May 15 00:32:56.545579 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 59434 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:32:56.546428 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:32:56.551528 systemd-logind[1422]: New session 20 of user core. May 15 00:32:56.562438 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:32:56.700574 sshd[5794]: pam_unix(sshd:session): session closed for user core May 15 00:32:56.703795 systemd[1]: sshd@19-10.0.0.130:22-10.0.0.1:59434.service: Deactivated successfully. May 15 00:32:56.705549 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:32:56.706215 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. May 15 00:32:56.707000 systemd-logind[1422]: Removed session 20. May 15 00:33:01.714031 systemd[1]: Started sshd@20-10.0.0.130:22-10.0.0.1:59440.service - OpenSSH per-connection server daemon (10.0.0.1:59440). 
May 15 00:33:01.751824 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 59440 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:33:01.753077 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:33:01.756644 systemd-logind[1422]: New session 21 of user core. May 15 00:33:01.769417 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 00:33:01.892094 sshd[5815]: pam_unix(sshd:session): session closed for user core May 15 00:33:01.895553 systemd[1]: sshd@20-10.0.0.130:22-10.0.0.1:59440.service: Deactivated successfully. May 15 00:33:01.897942 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:33:01.898746 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. May 15 00:33:01.899729 systemd-logind[1422]: Removed session 21. May 15 00:33:06.069509 kubelet[2463]: I0515 00:33:06.069367 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:33:06.430143 kubelet[2463]: E0515 00:33:06.430113 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:33:06.903877 systemd[1]: Started sshd@21-10.0.0.130:22-10.0.0.1:55628.service - OpenSSH per-connection server daemon (10.0.0.1:55628). May 15 00:33:06.952546 sshd[5832]: Accepted publickey for core from 10.0.0.1 port 55628 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:33:06.954282 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:33:06.958262 systemd-logind[1422]: New session 22 of user core. May 15 00:33:06.969521 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 00:33:07.186708 sshd[5832]: pam_unix(sshd:session): session closed for user core May 15 00:33:07.192651 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. May 15 00:33:07.193493 systemd[1]: sshd@21-10.0.0.130:22-10.0.0.1:55628.service: Deactivated successfully. May 15 00:33:07.197235 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:33:07.203594 systemd-logind[1422]: Removed session 22. May 15 00:33:08.829724 kubelet[2463]: I0515 00:33:08.829683 2463 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:33:09.427679 kubelet[2463]: E0515 00:33:09.427599 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:33:11.427650 kubelet[2463]: E0515 00:33:11.427612 2463 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:33:12.199644 systemd[1]: Started sshd@22-10.0.0.130:22-10.0.0.1:55634.service - OpenSSH per-connection server daemon (10.0.0.1:55634). May 15 00:33:12.241851 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 55634 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:33:12.243060 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:33:12.246840 systemd-logind[1422]: New session 23 of user core. May 15 00:33:12.257890 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 00:33:12.424564 sshd[5848]: pam_unix(sshd:session): session closed for user core May 15 00:33:12.427098 systemd[1]: sshd@22-10.0.0.130:22-10.0.0.1:55634.service: Deactivated successfully. May 15 00:33:12.429875 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:33:12.431548 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit. May 15 00:33:12.432650 systemd-logind[1422]: Removed session 23.
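The kubelet "Nameserver limits exceeded" errors above stem from the classic resolver cap of three nameserver entries (glibc's MAXNS): kubelet applies only the first three it finds in the host's resolv.conf (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and warns about the rest. A standalone sketch of that truncation, assuming only the three-entry limit; this is an illustration, not kubelet's actual code:

// Toy re-implementation of the nameserver truncation behind the kubelet
// warning: parse resolv.conf, keep at most three "nameserver" entries.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// A nameserver line has the form "nameserver <address>".
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}

	if len(nameservers) > maxNameservers {
		// Mirrors the log message: extra entries are dropped, not rejected.
		fmt.Printf("Nameserver limits exceeded; applying first %d: %s\n",
			maxNameservers, strings.Join(nameservers[:maxNameservers], " "))
	} else {
		fmt.Println("nameservers:", strings.Join(nameservers, " "))
	}
}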