May 14 23:52:25.924574 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:52:25.924597 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 22:17:23 -00 2025
May 14 23:52:25.924608 kernel: KASLR enabled
May 14 23:52:25.924614 kernel: efi: EFI v2.7 by EDK II
May 14 23:52:25.924619 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 14 23:52:25.924625 kernel: random: crng init done
May 14 23:52:25.924632 kernel: secureboot: Secure boot disabled
May 14 23:52:25.924638 kernel: ACPI: Early table checksum verification disabled
May 14 23:52:25.924644 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 14 23:52:25.924651 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 23:52:25.924658 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924664 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924670 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924676 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924684 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924692 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924698 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924705 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924711 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:52:25.924725 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 23:52:25.924732 kernel: NUMA: Failed to initialise from firmware
May 14 23:52:25.924738 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:52:25.924745 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
May 14 23:52:25.924751 kernel: Zone ranges:
May 14 23:52:25.924758 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:52:25.924765 kernel: DMA32 empty
May 14 23:52:25.924772 kernel: Normal empty
May 14 23:52:25.924778 kernel: Movable zone start for each node
May 14 23:52:25.924786 kernel: Early memory node ranges
May 14 23:52:25.924795 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 14 23:52:25.924804 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 14 23:52:25.924824 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 14 23:52:25.924845 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 14 23:52:25.924852 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 14 23:52:25.924859 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 23:52:25.924865 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 23:52:25.924871 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 23:52:25.924880 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 23:52:25.924887 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:52:25.924893 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 23:52:25.924903 kernel: psci: probing for conduit method from ACPI.
May 14 23:52:25.924910 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:52:25.924916 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:52:25.924924 kernel: psci: Trusted OS migration not required
May 14 23:52:25.924931 kernel: psci: SMC Calling Convention v1.1
May 14 23:52:25.924937 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 23:52:25.924944 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:52:25.924951 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:52:25.924958 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 23:52:25.924964 kernel: Detected PIPT I-cache on CPU0
May 14 23:52:25.924971 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:52:25.924978 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:52:25.924984 kernel: CPU features: detected: Spectre-v4
May 14 23:52:25.924992 kernel: CPU features: detected: Spectre-BHB
May 14 23:52:25.924999 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:52:25.925006 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:52:25.925012 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:52:25.925020 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:52:25.925026 kernel: alternatives: applying boot alternatives
May 14 23:52:25.925034 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e480c7900a171de0fa6cd5a3274267ba91118ae5fbe1e4dae15bc86928fa4899
May 14 23:52:25.925041 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:52:25.925048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:52:25.925055 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:52:25.925061 kernel: Fallback order for Node 0: 0
May 14 23:52:25.925070 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 23:52:25.925076 kernel: Policy zone: DMA
May 14 23:52:25.925083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:52:25.925089 kernel: software IO TLB: area num 4.
May 14 23:52:25.925096 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 14 23:52:25.925103 kernel: Memory: 2387356K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184932K reserved, 0K cma-reserved)
May 14 23:52:25.925110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:52:25.925116 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:52:25.925123 kernel: rcu: RCU event tracing is enabled.
May 14 23:52:25.925130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:52:25.925137 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:52:25.925144 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:52:25.925152 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:52:25.925159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:52:25.925166 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:52:25.925172 kernel: GICv3: 256 SPIs implemented
May 14 23:52:25.925178 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:52:25.925185 kernel: Root IRQ handler: gic_handle_irq
May 14 23:52:25.925191 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 23:52:25.925198 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 23:52:25.925204 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 23:52:25.925211 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 14 23:52:25.925218 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 14 23:52:25.925226 kernel: GICv3: using LPI property table @0x00000000400f0000
May 14 23:52:25.925233 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 14 23:52:25.925239 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:52:25.925246 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:52:25.925253 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 23:52:25.925260 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 23:52:25.925267 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 23:52:25.925273 kernel: arm-pv: using stolen time PV
May 14 23:52:25.925280 kernel: Console: colour dummy device 80x25
May 14 23:52:25.925287 kernel: ACPI: Core revision 20230628
May 14 23:52:25.925294 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 23:52:25.925302 kernel: pid_max: default: 32768 minimum: 301
May 14 23:52:25.925309 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:52:25.925316 kernel: landlock: Up and running.
May 14 23:52:25.925323 kernel: SELinux: Initializing.
May 14 23:52:25.925330 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:52:25.925337 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:52:25.925344 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 14 23:52:25.925351 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:52:25.925357 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:52:25.925366 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:52:25.925373 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:52:25.925379 kernel: Platform MSI: ITS@0x8080000 domain created
May 14 23:52:25.925386 kernel: PCI/MSI: ITS@0x8080000 domain created
May 14 23:52:25.925393 kernel: Remapping and enabling EFI services.
May 14 23:52:25.925399 kernel: smp: Bringing up secondary CPUs ...
May 14 23:52:25.925406 kernel: Detected PIPT I-cache on CPU1
May 14 23:52:25.925413 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 23:52:25.925420 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 14 23:52:25.925428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:52:25.925435 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 23:52:25.925448 kernel: Detected PIPT I-cache on CPU2
May 14 23:52:25.925457 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 23:52:25.925464 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 14 23:52:25.925471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:52:25.925479 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 23:52:25.925486 kernel: Detected PIPT I-cache on CPU3
May 14 23:52:25.925493 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 23:52:25.925501 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 14 23:52:25.925510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 23:52:25.925516 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 23:52:25.925523 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:52:25.925530 kernel: SMP: Total of 4 processors activated.
May 14 23:52:25.925537 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:52:25.925545 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 23:52:25.925552 kernel: CPU features: detected: Common not Private translations
May 14 23:52:25.925561 kernel: CPU features: detected: CRC32 instructions
May 14 23:52:25.925569 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 23:52:25.925576 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 23:52:25.925583 kernel: CPU features: detected: LSE atomic instructions
May 14 23:52:25.925590 kernel: CPU features: detected: Privileged Access Never
May 14 23:52:25.925598 kernel: CPU features: detected: RAS Extension Support
May 14 23:52:25.925605 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 23:52:25.925612 kernel: CPU: All CPU(s) started at EL1
May 14 23:52:25.925619 kernel: alternatives: applying system-wide alternatives
May 14 23:52:25.925628 kernel: devtmpfs: initialized
May 14 23:52:25.925635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:52:25.925642 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:52:25.925649 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:52:25.925656 kernel: SMBIOS 3.0.0 present.
May 14 23:52:25.925663 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 23:52:25.925670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:52:25.925677 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:52:25.925684 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:52:25.925693 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:52:25.925700 kernel: audit: initializing netlink subsys (disabled)
May 14 23:52:25.925708 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 14 23:52:25.925715 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:52:25.925728 kernel: cpuidle: using governor menu
May 14 23:52:25.925735 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:52:25.925742 kernel: ASID allocator initialised with 32768 entries
May 14 23:52:25.925750 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:52:25.925757 kernel: Serial: AMBA PL011 UART driver
May 14 23:52:25.925766 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:52:25.925773 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:52:25.925780 kernel: Modules: 509232 pages in range for PLT usage
May 14 23:52:25.925787 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:52:25.925797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:52:25.925804 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:52:25.925876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:52:25.925887 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:52:25.925894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:52:25.925906 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:52:25.925914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:52:25.925921 kernel: ACPI: Added _OSI(Module Device)
May 14 23:52:25.925928 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:52:25.925935 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:52:25.925943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:52:25.925950 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:52:25.925957 kernel: ACPI: Interpreter enabled
May 14 23:52:25.925964 kernel: ACPI: Using GIC for interrupt routing
May 14 23:52:25.925971 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:52:25.925980 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:52:25.925987 kernel: printk: console [ttyAMA0] enabled
May 14 23:52:25.925994 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:52:25.926136 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:52:25.926210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:52:25.926277 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:52:25.926342 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 23:52:25.926410 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 23:52:25.926419 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 23:52:25.926427 kernel: PCI host bridge to bus 0000:00
May 14 23:52:25.926499 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 23:52:25.926560 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:52:25.926624 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 23:52:25.926683 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:52:25.926779 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 23:52:25.926891 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:52:25.926965 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 23:52:25.927034 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 23:52:25.927101 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:52:25.927168 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:52:25.927235 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 23:52:25.927311 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 23:52:25.927377 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 23:52:25.927439 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:52:25.927512 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 23:52:25.927521 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 23:52:25.927529 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 23:52:25.927536 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 23:52:25.927546 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 23:52:25.927553 kernel: iommu: Default domain type: Translated
May 14 23:52:25.927560 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:52:25.927567 kernel: efivars: Registered efivars operations
May 14 23:52:25.927574 kernel: vgaarb: loaded
May 14 23:52:25.927582 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:52:25.927589 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:52:25.927596 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:52:25.927603 kernel: pnp: PnP ACPI init
May 14 23:52:25.927677 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 23:52:25.927687 kernel: pnp: PnP ACPI: found 1 devices
May 14 23:52:25.927694 kernel: NET: Registered PF_INET protocol family
May 14 23:52:25.927701 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:52:25.927708 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:52:25.927716 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:52:25.927742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:52:25.927749 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:52:25.927759 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:52:25.927767 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:52:25.927774 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:52:25.927781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:52:25.927788 kernel: PCI: CLS 0 bytes, default 64
May 14 23:52:25.927795 kernel: kvm [1]: HYP mode not available
May 14 23:52:25.927802 kernel: Initialise system trusted keyrings
May 14 23:52:25.927837 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:52:25.927847 kernel: Key type asymmetric registered
May 14 23:52:25.927857 kernel: Asymmetric key parser 'x509' registered
May 14 23:52:25.927864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:52:25.927871 kernel: io scheduler mq-deadline registered
May 14 23:52:25.927878 kernel: io scheduler kyber registered
May 14 23:52:25.927885 kernel: io scheduler bfq registered
May 14 23:52:25.927893 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 23:52:25.927900 kernel: ACPI: button: Power Button [PWRB]
May 14 23:52:25.927907 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 23:52:25.927986 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 23:52:25.927998 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:52:25.928006 kernel: thunder_xcv, ver 1.0
May 14 23:52:25.928013 kernel: thunder_bgx, ver 1.0
May 14 23:52:25.928020 kernel: nicpf, ver 1.0
May 14 23:52:25.928027 kernel: nicvf, ver 1.0
May 14 23:52:25.928104 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:52:25.928165 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:52:25 UTC (1747266745)
May 14 23:52:25.928174 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:52:25.928184 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 14 23:52:25.928191 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:52:25.928198 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:52:25.928205 kernel: NET: Registered PF_INET6 protocol family
May 14 23:52:25.928212 kernel: Segment Routing with IPv6
May 14 23:52:25.928219 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:52:25.928226 kernel: NET: Registered PF_PACKET protocol family
May 14 23:52:25.928233 kernel: Key type dns_resolver registered
May 14 23:52:25.928240 kernel: registered taskstats version 1
May 14 23:52:25.928247 kernel: Loading compiled-in X.509 certificates
May 14 23:52:25.928256 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02701f8a00afe25f5dd35b2d52090aece02392ec'
May 14 23:52:25.928263 kernel: Key type .fscrypt registered
May 14 23:52:25.928270 kernel: Key type fscrypt-provisioning registered
May 14 23:52:25.928277 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:52:25.928284 kernel: ima: Allocated hash algorithm: sha1
May 14 23:52:25.928292 kernel: ima: No architecture policies found
May 14 23:52:25.928299 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:52:25.928306 kernel: clk: Disabling unused clocks
May 14 23:52:25.928314 kernel: Freeing unused kernel memory: 38464K
May 14 23:52:25.928321 kernel: Run /init as init process
May 14 23:52:25.928328 kernel: with arguments:
May 14 23:52:25.928335 kernel: /init
May 14 23:52:25.928342 kernel: with environment:
May 14 23:52:25.928349 kernel: HOME=/
May 14 23:52:25.928356 kernel: TERM=linux
May 14 23:52:25.928363 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:52:25.928371 systemd[1]: Successfully made /usr/ read-only.
May 14 23:52:25.928383 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:52:25.928391 systemd[1]: Detected virtualization kvm.
May 14 23:52:25.928399 systemd[1]: Detected architecture arm64.
May 14 23:52:25.928406 systemd[1]: Running in initrd.
May 14 23:52:25.928414 systemd[1]: No hostname configured, using default hostname.
May 14 23:52:25.928421 systemd[1]: Hostname set to .
May 14 23:52:25.928429 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:52:25.928438 systemd[1]: Queued start job for default target initrd.target.
May 14 23:52:25.928446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:52:25.928454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:52:25.928462 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:52:25.928470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:52:25.928478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:52:25.928487 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:52:25.928498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:52:25.928506 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:52:25.928514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:52:25.928522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:52:25.928530 systemd[1]: Reached target paths.target - Path Units.
May 14 23:52:25.928537 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:52:25.928545 systemd[1]: Reached target swap.target - Swaps.
May 14 23:52:25.928553 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:52:25.928561 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:52:25.928571 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:52:25.928578 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:52:25.928586 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:52:25.928594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:52:25.928602 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:52:25.928610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:52:25.928618 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:52:25.928625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:52:25.928635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:52:25.928643 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:52:25.928651 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:52:25.928658 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:52:25.928666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:52:25.928674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:52:25.928682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:52:25.928690 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:52:25.928700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:52:25.928708 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:52:25.928716 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:52:25.928732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:52:25.928740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:25.928767 systemd-journald[235]: Collecting audit messages is disabled.
May 14 23:52:25.928786 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:52:25.928794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:52:25.928801 kernel: Bridge firewalling registered
May 14 23:52:25.928821 systemd-journald[235]: Journal started
May 14 23:52:25.928852 systemd-journald[235]: Runtime Journal (/run/log/journal/26e8c877de0941fbaef790b441eedad8) is 5.9M, max 47.3M, 41.4M free.
May 14 23:52:25.907030 systemd-modules-load[238]: Inserted module 'overlay'
May 14 23:52:25.927281 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 14 23:52:25.931868 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:52:25.936093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:52:25.937087 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:52:25.942004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:52:25.943381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:52:25.953802 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:52:25.956782 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:52:25.957702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:52:25.959305 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:52:25.962711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:52:25.971339 dracut-cmdline[276]: dracut-dracut-053
May 14 23:52:25.974326 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e480c7900a171de0fa6cd5a3274267ba91118ae5fbe1e4dae15bc86928fa4899
May 14 23:52:26.006165 systemd-resolved[280]: Positive Trust Anchors:
May 14 23:52:26.006189 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:52:26.006221 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:52:26.013900 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 14 23:52:26.015011 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:52:26.016001 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:52:26.069846 kernel: SCSI subsystem initialized
May 14 23:52:26.074826 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:52:26.087860 kernel: iscsi: registered transport (tcp)
May 14 23:52:26.102013 kernel: iscsi: registered transport (qla4xxx)
May 14 23:52:26.102045 kernel: QLogic iSCSI HBA Driver
May 14 23:52:26.165226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:52:26.167605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:52:26.205698 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:52:26.205770 kernel: device-mapper: uevent: version 1.0.3
May 14 23:52:26.206999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:52:26.258841 kernel: raid6: neonx8 gen() 15774 MB/s
May 14 23:52:26.275837 kernel: raid6: neonx4 gen() 15760 MB/s
May 14 23:52:26.292846 kernel: raid6: neonx2 gen() 13012 MB/s
May 14 23:52:26.309833 kernel: raid6: neonx1 gen() 10529 MB/s
May 14 23:52:26.326827 kernel: raid6: int64x8 gen() 6795 MB/s
May 14 23:52:26.343828 kernel: raid6: int64x4 gen() 7344 MB/s
May 14 23:52:26.360828 kernel: raid6: int64x2 gen() 6093 MB/s
May 14 23:52:26.377828 kernel: raid6: int64x1 gen() 5050 MB/s
May 14 23:52:26.377843 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s
May 14 23:52:26.394841 kernel: raid6: .... xor() 11949 MB/s, rmw enabled
May 14 23:52:26.394860 kernel: raid6: using neon recovery algorithm
May 14 23:52:26.399833 kernel: xor: measuring software checksum speed
May 14 23:52:26.399858 kernel: 8regs : 19376 MB/sec
May 14 23:52:26.400826 kernel: 32regs : 21710 MB/sec
May 14 23:52:26.401827 kernel: arm64_neon : 26138 MB/sec
May 14 23:52:26.401839 kernel: xor: using function: arm64_neon (26138 MB/sec)
May 14 23:52:26.457856 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:52:26.471679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:52:26.474807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:52:26.500031 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 14 23:52:26.503987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:52:26.506349 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:52:26.532865 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 14 23:52:26.560174 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:52:26.562217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:52:26.620863 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:52:26.622970 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:52:26.646872 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:52:26.648096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:52:26.651117 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:52:26.653968 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:52:26.655555 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:52:26.675123 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 14 23:52:26.675503 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 23:52:26.680807 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 23:52:26.682125 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:52:26.682143 kernel: GPT:9289727 != 19775487 May 14 23:52:26.682152 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:52:26.682161 kernel: GPT:9289727 != 19775487 May 14 23:52:26.682169 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:52:26.682177 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:52:26.684204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 14 23:52:26.684314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:52:26.687616 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:52:26.691671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:52:26.691853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:52:26.695394 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:52:26.699914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:52:26.703570 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) May 14 23:52:26.703592 kernel: BTRFS: device fsid 6bfb3c95-7a9f-4285-9600-0ba5e7814f96 devid 1 transid 47 /dev/vda3 scanned by (udev-worker) (518) May 14 23:52:26.717527 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 23:52:26.719668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:52:26.733785 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 23:52:26.744372 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 23:52:26.745405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 23:52:26.754344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:52:26.756132 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:52:26.757685 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:52:26.779334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 14 23:52:26.859271 disk-uuid[551]: Primary Header is updated. May 14 23:52:26.859271 disk-uuid[551]: Secondary Entries is updated. May 14 23:52:26.859271 disk-uuid[551]: Secondary Header is updated. May 14 23:52:26.864445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:52:27.879449 disk-uuid[560]: The operation has completed successfully. May 14 23:52:27.880375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:52:27.903211 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:52:27.903338 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:52:27.931866 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 23:52:27.948722 sh[569]: Success May 14 23:52:27.963891 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 23:52:27.993400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:52:27.995985 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:52:28.012123 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 23:52:28.018326 kernel: BTRFS info (device dm-0): first mount of filesystem 6bfb3c95-7a9f-4285-9600-0ba5e7814f96 May 14 23:52:28.018362 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 23:52:28.018373 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:52:28.020137 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:52:28.020153 kernel: BTRFS info (device dm-0): using free space tree May 14 23:52:28.023750 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:52:28.024959 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:52:28.025720 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 14 23:52:28.027561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 23:52:28.046160 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 14 23:52:28.046216 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:52:28.046234 kernel: BTRFS info (device vda6): using free space tree May 14 23:52:28.048951 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:52:28.052845 kernel: BTRFS info (device vda6): last unmount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 14 23:52:28.056453 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:52:28.058423 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 23:52:28.128428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:52:28.132985 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 23:52:28.165785 ignition[658]: Ignition 2.20.0 May 14 23:52:28.165795 ignition[658]: Stage: fetch-offline May 14 23:52:28.165842 ignition[658]: no configs at "/usr/lib/ignition/base.d" May 14 23:52:28.165850 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:28.166015 ignition[658]: parsed url from cmdline: "" May 14 23:52:28.166018 ignition[658]: no config URL provided May 14 23:52:28.166022 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:52:28.166029 ignition[658]: no config at "/usr/lib/ignition/user.ign" May 14 23:52:28.166050 ignition[658]: op(1): [started] loading QEMU firmware config module May 14 23:52:28.166055 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 23:52:28.175653 ignition[658]: op(1): [finished] loading QEMU firmware config module May 14 23:52:28.178681 systemd-networkd[757]: lo: Link UP May 14 23:52:28.178689 systemd-networkd[757]: lo: Gained carrier May 14 23:52:28.179476 systemd-networkd[757]: Enumeration completed May 14 23:52:28.179579 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:52:28.180043 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:52:28.180047 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:52:28.181039 systemd-networkd[757]: eth0: Link UP May 14 23:52:28.181042 systemd-networkd[757]: eth0: Gained carrier May 14 23:52:28.181049 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:52:28.181500 systemd[1]: Reached target network.target - Network. 
May 14 23:52:28.195871 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:52:28.220755 ignition[658]: parsing config with SHA512: 0244d6cd7901bc2f0126b02c7a0208e3ed35d26988b7a93a5c3ace7525653dcd015d773754132cf6d54ca5f1555b03c2253fbc020602c15af5c42b8cd4b570f8 May 14 23:52:28.227581 unknown[658]: fetched base config from "system" May 14 23:52:28.227591 unknown[658]: fetched user config from "qemu" May 14 23:52:28.228048 ignition[658]: fetch-offline: fetch-offline passed May 14 23:52:28.229790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:52:28.228115 ignition[658]: Ignition finished successfully May 14 23:52:28.231190 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 23:52:28.231906 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 23:52:28.252785 ignition[767]: Ignition 2.20.0 May 14 23:52:28.252795 ignition[767]: Stage: kargs May 14 23:52:28.252966 ignition[767]: no configs at "/usr/lib/ignition/base.d" May 14 23:52:28.252976 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:28.253837 ignition[767]: kargs: kargs passed May 14 23:52:28.253880 ignition[767]: Ignition finished successfully May 14 23:52:28.255686 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:52:28.257783 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 23:52:28.283888 ignition[775]: Ignition 2.20.0 May 14 23:52:28.283897 ignition[775]: Stage: disks May 14 23:52:28.284061 ignition[775]: no configs at "/usr/lib/ignition/base.d" May 14 23:52:28.284070 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:28.286148 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 14 23:52:28.284952 ignition[775]: disks: disks passed May 14 23:52:28.287067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:52:28.284995 ignition[775]: Ignition finished successfully May 14 23:52:28.288419 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:52:28.289788 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:52:28.290929 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:52:28.292244 systemd[1]: Reached target basic.target - Basic System. May 14 23:52:28.294131 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:52:28.322160 systemd-fsck[785]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 23:52:28.325544 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 23:52:28.328101 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 23:52:28.381827 kernel: EXT4-fs (vda9): mounted filesystem ef34f074-e751-474e-98f6-0625809ada62 r/w with ordered data mode. Quota mode: none. May 14 23:52:28.382365 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:52:28.383385 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:52:28.385214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:52:28.386551 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:52:28.387335 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 23:52:28.387372 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:52:28.387391 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 14 23:52:28.397461 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 23:52:28.400443 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:52:28.402640 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (793) May 14 23:52:28.404444 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 14 23:52:28.404461 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:52:28.404471 kernel: BTRFS info (device vda6): using free space tree May 14 23:52:28.406916 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:52:28.408386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 23:52:28.441850 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:52:28.446595 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory May 14 23:52:28.451171 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:52:28.454563 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:52:28.527573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 23:52:28.529652 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 23:52:28.531169 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 23:52:28.551832 kernel: BTRFS info (device vda6): last unmount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 14 23:52:28.570018 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 14 23:52:28.579713 ignition[910]: INFO : Ignition 2.20.0 May 14 23:52:28.579713 ignition[910]: INFO : Stage: mount May 14 23:52:28.581376 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:52:28.581376 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:28.581376 ignition[910]: INFO : mount: mount passed May 14 23:52:28.581376 ignition[910]: INFO : Ignition finished successfully May 14 23:52:28.583057 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 23:52:28.585946 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 23:52:29.017793 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 23:52:29.019299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:52:29.036141 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924) May 14 23:52:29.036175 kernel: BTRFS info (device vda6): first mount of filesystem 2550790c-7644-4e3d-a6a1-eb68bfdbcf7d May 14 23:52:29.036185 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:52:29.037249 kernel: BTRFS info (device vda6): using free space tree May 14 23:52:29.038824 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:52:29.040125 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 23:52:29.065552 ignition[941]: INFO : Ignition 2.20.0 May 14 23:52:29.065552 ignition[941]: INFO : Stage: files May 14 23:52:29.067008 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:52:29.067008 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:29.067008 ignition[941]: DEBUG : files: compiled without relabeling support, skipping May 14 23:52:29.069852 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 23:52:29.069852 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 23:52:29.072033 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 23:52:29.073127 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 23:52:29.073127 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 23:52:29.072558 unknown[941]: wrote ssh authorized keys file for user: core May 14 23:52:29.076075 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 23:52:29.076075 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 23:52:30.094884 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 23:52:30.147969 systemd-networkd[757]: eth0: Gained IPv6LL May 14 23:52:33.409025 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 23:52:33.409025 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:52:33.409025 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 14 23:52:33.832685 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 23:52:34.024643 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:52:34.026263 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 23:52:34.333106 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 23:52:34.866829 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 23:52:34.866829 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 23:52:34.869764 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 23:52:34.871201 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 23:52:34.896241 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:52:34.900913 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:52:34.902135 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 23:52:34.902135 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 23:52:34.902135 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 23:52:34.902135 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 23:52:34.902135 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 23:52:34.902135 ignition[941]: INFO : files: files passed May 14 23:52:34.902135 ignition[941]: INFO : Ignition finished successfully May 14 23:52:34.906311 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 23:52:34.908510 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 23:52:34.914942 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 23:52:34.924320 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory May 14 23:52:34.924223 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 14 23:52:34.927844 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:34.927844 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:34.924319 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 23:52:34.933895 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:52:34.929929 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:52:34.932255 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 23:52:34.933921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 23:52:34.966790 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 23:52:34.966915 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 23:52:34.968749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 23:52:34.970133 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 23:52:34.971642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 23:52:34.972951 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 23:52:34.996741 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:52:34.999259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:52:35.016869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:52:35.018787 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:52:35.020624 systemd[1]: Stopped target timers.target - Timer Units. 
May 14 23:52:35.021479 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:52:35.021605 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:52:35.023528 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:52:35.025333 systemd[1]: Stopped target basic.target - Basic System. May 14 23:52:35.026775 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:52:35.028318 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:52:35.030040 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:52:35.031765 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:52:35.033431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:52:35.035140 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:52:35.036855 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:52:35.038155 systemd[1]: Stopped target swap.target - Swaps. May 14 23:52:35.039287 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:52:35.039458 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 23:52:35.041180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:52:35.042771 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:52:35.044255 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 23:52:35.045696 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:52:35.047718 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:52:35.047907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:52:35.049921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 14 23:52:35.050079 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:52:35.051670 systemd[1]: Stopped target paths.target - Path Units. May 14 23:52:35.052852 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:52:35.052995 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:52:35.054392 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:52:35.055626 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:52:35.057327 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:52:35.057452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:52:35.058631 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:52:35.058764 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:52:35.059907 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:52:35.060051 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:52:35.061306 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:52:35.061453 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:52:35.063647 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:52:35.072616 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:52:35.073467 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 23:52:35.073663 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:52:35.075382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:52:35.075525 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:52:35.081980 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 14 23:52:35.083710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:52:35.089086 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:52:35.091356 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:52:35.091488 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 23:52:35.095018 ignition[997]: INFO : Ignition 2.20.0 May 14 23:52:35.095018 ignition[997]: INFO : Stage: umount May 14 23:52:35.096961 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:52:35.096961 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:52:35.096961 ignition[997]: INFO : umount: umount passed May 14 23:52:35.096961 ignition[997]: INFO : Ignition finished successfully May 14 23:52:35.097389 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:52:35.097528 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:52:35.098629 systemd[1]: Stopped target network.target - Network. May 14 23:52:35.099893 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:52:35.099947 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:52:35.101129 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:52:35.101170 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:52:35.102466 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:52:35.102508 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:52:35.103833 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:52:35.103875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:52:35.105172 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:52:35.105211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
May 14 23:52:35.106535 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:52:35.108034 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:52:35.116098 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:52:35.116908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:52:35.119418 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:52:35.119667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:52:35.119715 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:52:35.123401 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:52:35.125545 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:52:35.125647 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:52:35.128648 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:52:35.128885 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:52:35.128926 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:52:35.131254 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:52:35.132737 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:52:35.132797 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:52:35.134499 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:52:35.134543 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:52:35.137195 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 23:52:35.137242 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 14 23:52:35.138618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:52:35.140915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:52:35.151155 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:52:35.151288 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:52:35.153553 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:52:35.153625 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:52:35.154758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:52:35.154790 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:52:35.156300 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:52:35.156345 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:52:35.158579 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:52:35.158627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:52:35.160849 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:52:35.160897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:52:35.163871 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:52:35.165364 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:52:35.165416 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:52:35.168062 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 23:52:35.168104 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:52:35.169832 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:52:35.169874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:52:35.171766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:52:35.171821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:35.183513 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:52:35.183641 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:52:35.185208 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:52:35.185274 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:52:35.186798 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:52:35.188590 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:52:35.209412 systemd[1]: Switching root.
May 14 23:52:35.239625 systemd-journald[235]: Journal stopped
May 14 23:52:36.094034 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
May 14 23:52:36.094094 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:52:36.094106 kernel: SELinux: policy capability open_perms=1
May 14 23:52:36.094116 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:52:36.094126 kernel: SELinux: policy capability always_check_network=0
May 14 23:52:36.094135 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:52:36.094145 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:52:36.094167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:52:36.094182 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:52:36.094192 kernel: audit: type=1403 audit(1747266755.450:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:52:36.094203 systemd[1]: Successfully loaded SELinux policy in 29.264ms.
May 14 23:52:36.094224 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.383ms.
May 14 23:52:36.094236 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:52:36.094247 systemd[1]: Detected virtualization kvm.
May 14 23:52:36.094257 systemd[1]: Detected architecture arm64.
May 14 23:52:36.094267 systemd[1]: Detected first boot.
May 14 23:52:36.094279 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:52:36.094290 zram_generator::config[1044]: No configuration found.
May 14 23:52:36.094301 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:52:36.094311 systemd[1]: Populated /etc with preset unit settings.
May 14 23:52:36.094322 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:52:36.094333 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:52:36.094343 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:52:36.094354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:52:36.094364 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:52:36.094377 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:52:36.094388 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:52:36.094398 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:52:36.094409 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:52:36.094420 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:52:36.094430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:52:36.094441 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:52:36.094452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:52:36.094465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:52:36.094476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:52:36.094486 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:52:36.094503 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:52:36.094516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:52:36.094529 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 23:52:36.094542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:52:36.094552 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:52:36.094564 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:52:36.094576 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:52:36.094586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:52:36.094597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:52:36.094608 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:52:36.094618 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:52:36.094629 systemd[1]: Reached target swap.target - Swaps.
May 14 23:52:36.094639 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:52:36.094650 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:52:36.094662 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:52:36.094672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:52:36.094683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:52:36.094703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:52:36.094715 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:52:36.094725 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:52:36.094736 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:52:36.094746 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:52:36.094757 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:52:36.094770 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:52:36.094780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:52:36.094791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:52:36.094801 systemd[1]: Reached target machines.target - Containers.
May 14 23:52:36.094819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:52:36.094841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:52:36.094852 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:52:36.094864 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:52:36.094876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:52:36.094896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:52:36.094907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:52:36.094918 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:52:36.094929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:52:36.094939 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:52:36.094950 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:52:36.094962 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:52:36.094973 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:52:36.094985 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:52:36.094996 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:52:36.095007 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:52:36.095017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:52:36.095028 kernel: fuse: init (API version 7.39)
May 14 23:52:36.095038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:52:36.095048 kernel: loop: module loaded
May 14 23:52:36.095058 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:52:36.095068 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:52:36.095080 kernel: ACPI: bus type drm_connector registered
May 14 23:52:36.095090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:52:36.095112 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:52:36.095122 systemd[1]: Stopped verity-setup.service.
May 14 23:52:36.095138 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:52:36.095149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:52:36.095160 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:52:36.095171 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:52:36.095183 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:52:36.095195 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:52:36.095206 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:52:36.095217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:52:36.095230 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:52:36.095259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:52:36.095270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:52:36.095302 systemd-journald[1112]: Collecting audit messages is disabled.
May 14 23:52:36.095325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:52:36.095337 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:52:36.095348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:52:36.095359 systemd-journald[1112]: Journal started
May 14 23:52:36.095380 systemd-journald[1112]: Runtime Journal (/run/log/journal/26e8c877de0941fbaef790b441eedad8) is 5.9M, max 47.3M, 41.4M free.
May 14 23:52:35.852219 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:52:35.868225 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 23:52:35.870380 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:52:36.097401 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:52:36.099571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:52:36.099772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:52:36.101134 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:52:36.101300 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:52:36.102417 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:52:36.102593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:52:36.103792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:52:36.105034 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:52:36.106466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:52:36.107723 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:52:36.119967 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:52:36.122232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:52:36.124017 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:52:36.124899 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:52:36.124928 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:52:36.126616 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:52:36.137336 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:52:36.139395 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:52:36.140386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:52:36.141859 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:52:36.143657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:52:36.144772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:52:36.148892 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:52:36.150197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:52:36.153521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:52:36.155965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:52:36.160218 systemd-journald[1112]: Time spent on flushing to /var/log/journal/26e8c877de0941fbaef790b441eedad8 is 19.257ms for 872 entries.
May 14 23:52:36.160218 systemd-journald[1112]: System Journal (/var/log/journal/26e8c877de0941fbaef790b441eedad8) is 8M, max 195.6M, 187.6M free.
May 14 23:52:36.236619 systemd-journald[1112]: Received client request to flush runtime journal.
May 14 23:52:36.236724 kernel: loop0: detected capacity change from 0 to 126448
May 14 23:52:36.236756 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:52:36.236801 kernel: loop1: detected capacity change from 0 to 189592
May 14 23:52:36.160463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:52:36.165849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:52:36.167058 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:52:36.169089 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:52:36.170528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:52:36.178367 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:52:36.180862 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:52:36.182420 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:52:36.184965 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:52:36.193042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:52:36.202283 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 14 23:52:36.202294 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 14 23:52:36.207591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:52:36.212134 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:52:36.215905 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 14 23:52:36.239379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:52:36.241220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:52:36.257994 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:52:36.260586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:52:36.277038 kernel: loop2: detected capacity change from 0 to 103832
May 14 23:52:36.282185 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
May 14 23:52:36.282202 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
May 14 23:52:36.286579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:52:36.316846 kernel: loop3: detected capacity change from 0 to 126448
May 14 23:52:36.322956 kernel: loop4: detected capacity change from 0 to 189592
May 14 23:52:36.334846 kernel: loop5: detected capacity change from 0 to 103832
May 14 23:52:36.340995 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 23:52:36.341397 (sd-merge)[1188]: Merged extensions into '/usr'.
May 14 23:52:36.344749 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:52:36.344767 systemd[1]: Reloading...
May 14 23:52:36.403899 zram_generator::config[1217]: No configuration found.
May 14 23:52:36.464394 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:52:36.505853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:52:36.556338 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:52:36.556737 systemd[1]: Reloading finished in 211 ms.
May 14 23:52:36.575121 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:52:36.576314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:52:36.594108 systemd[1]: Starting ensure-sysext.service...
May 14 23:52:36.596091 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:52:36.609145 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
May 14 23:52:36.609162 systemd[1]: Reloading...
May 14 23:52:36.617517 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:52:36.617740 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:52:36.619986 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:52:36.620191 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 14 23:52:36.620243 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
May 14 23:52:36.622523 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:52:36.622539 systemd-tmpfiles[1252]: Skipping /boot
May 14 23:52:36.631470 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:52:36.631486 systemd-tmpfiles[1252]: Skipping /boot
May 14 23:52:36.657835 zram_generator::config[1280]: No configuration found.
May 14 23:52:36.746489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:52:36.797043 systemd[1]: Reloading finished in 187 ms.
May 14 23:52:36.808490 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:52:36.809934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:52:36.835069 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:52:36.837539 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:52:36.849645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:52:36.852644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:52:36.860252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:52:36.864129 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:52:36.872041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:52:36.873314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:52:36.876033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:52:36.881448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:52:36.883066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:52:36.883194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:52:36.886503 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:52:36.898416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:52:36.900387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:52:36.900617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:52:36.904570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:52:36.904754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:52:36.907517 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:52:36.907744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:52:36.908759 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
May 14 23:52:36.923325 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:52:36.924888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:52:36.927004 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:52:36.933834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:52:36.944934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:52:36.947871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:52:36.949031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:52:36.949164 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:52:36.950383 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:52:36.952641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:52:36.955894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:52:36.957290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:52:36.959002 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:52:36.959180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:52:36.963077 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:52:36.963346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:52:36.965236 systemd[1]: Finished ensure-sysext.service.
May 14 23:52:36.966372 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:52:36.968333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:52:36.968510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:52:36.969471 augenrules[1361]: No rules
May 14 23:52:36.982318 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:52:36.984348 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:52:36.986260 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:52:36.996585 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:52:37.006776 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:52:37.009362 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:52:37.009438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:52:37.013162 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:52:37.014100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:52:37.016380 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 23:52:37.059534 systemd-resolved[1321]: Positive Trust Anchors:
May 14 23:52:37.059913 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:52:37.059950 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:52:37.066771 systemd-resolved[1321]: Defaulting to hostname 'linux'.
May 14 23:52:37.069060 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:52:37.069973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:52:37.071859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (1376)
May 14 23:52:37.118321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:52:37.119652 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:52:37.123553 systemd-networkd[1393]: lo: Link UP
May 14 23:52:37.123563 systemd-networkd[1393]: lo: Gained carrier
May 14 23:52:37.126358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:52:37.126530 systemd-networkd[1393]: Enumeration completed
May 14 23:52:37.127709 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:52:37.128837 systemd[1]: Reached target network.target - Network.
May 14 23:52:37.128882 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:52:37.128886 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:52:37.129508 systemd-networkd[1393]: eth0: Link UP
May 14 23:52:37.129516 systemd-networkd[1393]: eth0: Gained carrier
May 14 23:52:37.129529 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:52:37.131095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:52:37.135740 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:52:37.144361 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:52:37.147945 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:52:37.149141 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
May 14 23:52:37.589318 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 23:52:37.589375 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-05-14 23:52:37.589224 UTC.
May 14 23:52:37.590255 systemd-resolved[1321]: Clock change detected. Flushing caches.
May 14 23:52:37.604533 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:52:37.605895 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:52:37.619334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:52:37.630329 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:52:37.633260 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:52:37.659327 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:52:37.682860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:52:37.690801 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:52:37.692285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:52:37.693178 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:52:37.694188 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:52:37.695169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:52:37.696335 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:52:37.697278 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:52:37.698295 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:52:37.699316 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:52:37.699349 systemd[1]: Reached target paths.target - Path Units.
May 14 23:52:37.700046 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:52:37.701916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:52:37.704300 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:52:37.707845 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 23:52:37.708996 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 23:52:37.709934 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 23:52:37.713073 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:52:37.714970 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 23:52:37.717126 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:52:37.718482 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:52:37.719493 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:52:37.720269 systemd[1]: Reached target basic.target - Basic System.
May 14 23:52:37.721205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:52:37.721239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:52:37.722220 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:52:37.723988 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:52:37.727012 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:52:37.727988 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:52:37.730159 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:52:37.730937 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:52:37.733302 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:52:37.735280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:52:37.739487 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:52:37.743707 jq[1426]: false
May 14 23:52:37.744901 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:52:37.748082 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:52:37.750113 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:52:37.750706 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:52:37.751311 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:52:37.757132 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:52:37.759143 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:52:37.767356 dbus-daemon[1425]: [system] SELinux support is enabled
May 14 23:52:37.771455 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:52:37.774422 extend-filesystems[1427]: Found loop3
May 14 23:52:37.775748 extend-filesystems[1427]: Found loop4
May 14 23:52:37.775748 extend-filesystems[1427]: Found loop5
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda1
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda2
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda3
May 14 23:52:37.775748 extend-filesystems[1427]: Found usr
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda4
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda6
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda7
May 14 23:52:37.775748 extend-filesystems[1427]: Found vda9
May 14 23:52:37.775748 extend-filesystems[1427]: Checking size of /dev/vda9
May 14 23:52:37.789734 jq[1439]: true
May 14 23:52:37.777396 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:52:37.799299 extend-filesystems[1427]: Resized partition /dev/vda9
May 14 23:52:37.777599 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:52:37.803558 update_engine[1434]: I20250514 23:52:37.802659 1434 main.cc:92] Flatcar Update Engine starting
May 14 23:52:37.803769 extend-filesystems[1458]: resize2fs 1.47.2 (1-Jan-2025)
May 14 23:52:37.813018 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 23:52:37.777894 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:52:37.813135 jq[1448]: true
May 14 23:52:37.813325 update_engine[1434]: I20250514 23:52:37.804589 1434 update_check_scheduler.cc:74] Next update check in 4m25s
May 14 23:52:37.778068 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:52:37.783430 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:52:37.783621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:52:37.803230 (ntainerd)[1449]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:52:37.805520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:52:37.805555 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:52:37.806945 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:52:37.808228 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:52:37.812655 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:52:37.836891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (1364)
May 14 23:52:37.836948 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 23:52:37.843022 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:52:37.854712 tar[1446]: linux-arm64/helm
May 14 23:52:37.872047 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 23:52:37.872047 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 23:52:37.872047 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 23:52:37.875273 extend-filesystems[1427]: Resized filesystem in /dev/vda9
May 14 23:52:37.875704 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:52:37.879760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:52:37.883958 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (Power Button)
May 14 23:52:37.884844 systemd-logind[1432]: New seat seat0.
May 14 23:52:37.886486 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:52:37.920382 bash[1480]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:52:37.922471 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:52:37.926780 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 23:52:37.929973 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:52:38.059020 containerd[1449]: time="2025-05-14T23:52:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 23:52:38.061882 containerd[1449]: time="2025-05-14T23:52:38.059861195Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 14 23:52:38.071115 containerd[1449]: time="2025-05-14T23:52:38.071065035Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.44µs"
May 14 23:52:38.071506 containerd[1449]: time="2025-05-14T23:52:38.071281195Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 23:52:38.071506 containerd[1449]: time="2025-05-14T23:52:38.071353875Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 23:52:38.071734 containerd[1449]: time="2025-05-14T23:52:38.071711395Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 23:52:38.071910 containerd[1449]: time="2025-05-14T23:52:38.071889675Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 23:52:38.072050 containerd[1449]: time="2025-05-14T23:52:38.072033115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:52:38.072464 containerd[1449]: time="2025-05-14T23:52:38.072243675Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:52:38.072464 containerd[1449]: time="2025-05-14T23:52:38.072315515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:52:38.072925 containerd[1449]: time="2025-05-14T23:52:38.072900475Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:52:38.073054 containerd[1449]: time="2025-05-14T23:52:38.073036315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:52:38.073204 containerd[1449]: time="2025-05-14T23:52:38.073111835Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:52:38.073204 containerd[1449]: time="2025-05-14T23:52:38.073128315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 23:52:38.073539 containerd[1449]: time="2025-05-14T23:52:38.073405395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 23:52:38.073941 containerd[1449]: time="2025-05-14T23:52:38.073828035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:52:38.074083 containerd[1449]: time="2025-05-14T23:52:38.074018835Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:52:38.074141 containerd[1449]: time="2025-05-14T23:52:38.074125595Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 23:52:38.074324 containerd[1449]: time="2025-05-14T23:52:38.074229075Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 23:52:38.074927 containerd[1449]: time="2025-05-14T23:52:38.074729155Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 23:52:38.074927 containerd[1449]: time="2025-05-14T23:52:38.074879195Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:52:38.085285 containerd[1449]: time="2025-05-14T23:52:38.085235275Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 23:52:38.085587 containerd[1449]: time="2025-05-14T23:52:38.085479515Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 23:52:38.085753 containerd[1449]: time="2025-05-14T23:52:38.085575875Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 23:52:38.085753 containerd[1449]: time="2025-05-14T23:52:38.085713475Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 23:52:38.085967 containerd[1449]: time="2025-05-14T23:52:38.085731595Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 23:52:38.085967 containerd[1449]: time="2025-05-14T23:52:38.085912395Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 23:52:38.085967 containerd[1449]: time="2025-05-14T23:52:38.085947555Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 23:52:38.086176 containerd[1449]: time="2025-05-14T23:52:38.086102435Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 23:52:38.086176 containerd[1449]: time="2025-05-14T23:52:38.086131955Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 23:52:38.086176 containerd[1449]: time="2025-05-14T23:52:38.086145715Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 23:52:38.086176 containerd[1449]: time="2025-05-14T23:52:38.086155355Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 23:52:38.086357 containerd[1449]: time="2025-05-14T23:52:38.086293915Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086591515Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086634395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086650115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086661195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086672275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 23:52:38.086702 containerd[1449]: time="2025-05-14T23:52:38.086681915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.086692795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.086970515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.086984155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.086995995Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.087006915Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.087297355Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.087314315Z" level=info msg="Start snapshots syncer"
May 14 23:52:38.087462 containerd[1449]: time="2025-05-14T23:52:38.087340235Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 23:52:38.087658 containerd[1449]: time="2025-05-14T23:52:38.087601235Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 23:52:38.087658 containerd[1449]: time="2025-05-14T23:52:38.087651035Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 23:52:38.087774 containerd[1449]: time="2025-05-14T23:52:38.087718355Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 23:52:38.087896 containerd[1449]: time="2025-05-14T23:52:38.087847835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 23:52:38.087931 containerd[1449]: time="2025-05-14T23:52:38.087900395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 23:52:38.087931 containerd[1449]: time="2025-05-14T23:52:38.087914355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 23:52:38.087931 containerd[1449]: time="2025-05-14T23:52:38.087926075Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 23:52:38.087995 containerd[1449]: time="2025-05-14T23:52:38.087939675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 23:52:38.087995 containerd[1449]: time="2025-05-14T23:52:38.087951235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 23:52:38.087995 containerd[1449]: time="2025-05-14T23:52:38.087962155Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 23:52:38.087995 containerd[1449]: time="2025-05-14T23:52:38.087987635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 23:52:38.088059 containerd[1449]: time="2025-05-14T23:52:38.088000795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 23:52:38.088059 containerd[1449]: time="2025-05-14T23:52:38.088011035Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 23:52:38.088059 containerd[1449]: time="2025-05-14T23:52:38.088046315Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088060595Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088069995Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088079555Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088088595Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088098195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 14 23:52:38.088111 containerd[1449]: time="2025-05-14T23:52:38.088109275Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 14 23:52:38.088216 containerd[1449]: time="2025-05-14T23:52:38.088186635Z" level=info msg="runtime interface created"
May 14 23:52:38.088216 containerd[1449]: time="2025-05-14T23:52:38.088192795Z" level=info msg="created NRI interface"
May 14 23:52:38.088216 containerd[1449]: time="2025-05-14T23:52:38.088201355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 14 23:52:38.088216 containerd[1449]: time="2025-05-14T23:52:38.088214395Z" level=info msg="Connect containerd service"
May 14 23:52:38.088279 containerd[1449]: time="2025-05-14T23:52:38.088248835Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 23:52:38.089679 containerd[1449]: time="2025-05-14T23:52:38.089207715Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:52:38.194905 containerd[1449]: time="2025-05-14T23:52:38.194789195Z" level=info msg="Start subscribing containerd event"
May 14 23:52:38.194905 containerd[1449]: time="2025-05-14T23:52:38.194860995Z" level=info msg="Start recovering state"
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.194962435Z" level=info msg="Start event monitor"
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.194985755Z" level=info msg="Start cni network conf syncer for default"
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.195000795Z" level=info msg="Start streaming server"
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.195009515Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.195016715Z" level=info msg="runtime interface starting up..."
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.195025915Z" level=info msg="starting plugins..."
May 14 23:52:38.195131 containerd[1449]: time="2025-05-14T23:52:38.195040195Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 14 23:52:38.195889 containerd[1449]: time="2025-05-14T23:52:38.195313595Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 23:52:38.196185 containerd[1449]: time="2025-05-14T23:52:38.196155115Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 23:52:38.196598 containerd[1449]: time="2025-05-14T23:52:38.196489275Z" level=info msg="containerd successfully booted in 0.137901s"
May 14 23:52:38.196675 systemd[1]: Started containerd.service - containerd container runtime.
May 14 23:52:38.230897 tar[1446]: linux-arm64/LICENSE
May 14 23:52:38.230897 tar[1446]: linux-arm64/README.md
May 14 23:52:38.249051 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 23:52:38.350597 sshd_keygen[1443]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 23:52:38.368968 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 23:52:38.373209 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:52:38.391406 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:52:38.391649 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:52:38.394380 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:52:38.423764 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:52:38.426384 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:52:38.428319 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 14 23:52:38.429403 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:52:38.941386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 23:52:38.943475 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:41086.service - OpenSSH per-connection server daemon (10.0.0.1:41086).
May 14 23:52:39.015078 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 41086 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 14 23:52:39.016782 sshd-session[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:39.028642 systemd-logind[1432]: New session 1 of user core.
May 14 23:52:39.029631 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 23:52:39.031996 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 23:52:39.056921 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 23:52:39.060396 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 23:52:39.081079 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 23:52:39.083358 systemd-logind[1432]: New session c1 of user core.
May 14 23:52:39.185398 systemd[1535]: Queued start job for default target default.target.
May 14 23:52:39.196838 systemd[1535]: Created slice app.slice - User Application Slice.
May 14 23:52:39.196888 systemd[1535]: Reached target paths.target - Paths.
May 14 23:52:39.196927 systemd[1535]: Reached target timers.target - Timers.
May 14 23:52:39.198183 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 23:52:39.207644 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 23:52:39.207712 systemd[1535]: Reached target sockets.target - Sockets.
May 14 23:52:39.207760 systemd[1535]: Reached target basic.target - Basic System.
May 14 23:52:39.207791 systemd[1535]: Reached target default.target - Main User Target.
May 14 23:52:39.207819 systemd[1535]: Startup finished in 118ms.
May 14 23:52:39.207968 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 23:52:39.218065 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 23:52:39.278304 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:41094.service - OpenSSH per-connection server daemon (10.0.0.1:41094).
May 14 23:52:39.327250 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 41094 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 14 23:52:39.328494 sshd-session[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:39.332743 systemd-logind[1432]: New session 2 of user core.
May 14 23:52:39.339004 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 23:52:39.352983 systemd-networkd[1393]: eth0: Gained IPv6LL
May 14 23:52:39.359402 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 23:52:39.360776 systemd[1]: Reached target network-online.target - Network is Online.
May 14 23:52:39.362949 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 14 23:52:39.365124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:52:39.366920 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 23:52:39.390812 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 14 23:52:39.391004 sshd[1548]: Connection closed by 10.0.0.1 port 41094
May 14 23:52:39.391367 sshd-session[1546]: pam_unix(sshd:session): session closed for user core
May 14 23:52:39.391748 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 14 23:52:39.397380 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:41094.service: Deactivated successfully.
May 14 23:52:39.398741 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 23:52:39.400032 systemd[1]: session-2.scope: Deactivated successfully.
May 14 23:52:39.400649 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit.
May 14 23:52:39.401941 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 23:52:39.403207 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:41106.service - OpenSSH per-connection server daemon (10.0.0.1:41106).
May 14 23:52:39.404952 systemd-logind[1432]: Removed session 2.
May 14 23:52:39.452405 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 41106 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs
May 14 23:52:39.453171 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:52:39.457527 systemd-logind[1432]: New session 3 of user core.
May 14 23:52:39.466018 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:52:39.517450 sshd[1575]: Connection closed by 10.0.0.1 port 41106 May 14 23:52:39.517780 sshd-session[1571]: pam_unix(sshd:session): session closed for user core May 14 23:52:39.522391 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:41106.service: Deactivated successfully. May 14 23:52:39.524029 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:52:39.524617 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. May 14 23:52:39.525473 systemd-logind[1432]: Removed session 3. May 14 23:52:39.921620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:39.923651 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:52:39.925984 systemd[1]: Startup finished in 569ms (kernel) + 9.726s (initrd) + 4.067s (userspace) = 14.364s. May 14 23:52:39.934230 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:40.405573 kubelet[1585]: E0514 23:52:40.405454 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:40.408097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:40.408237 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:40.408603 systemd[1]: kubelet.service: Consumed 812ms CPU time, 236M memory peak. May 14 23:52:49.528664 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:58406.service - OpenSSH per-connection server daemon (10.0.0.1:58406). 
May 14 23:52:49.571205 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 58406 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:49.572329 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:49.576663 systemd-logind[1432]: New session 4 of user core. May 14 23:52:49.588011 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:52:49.639625 sshd[1600]: Connection closed by 10.0.0.1 port 58406 May 14 23:52:49.640164 sshd-session[1598]: pam_unix(sshd:session): session closed for user core May 14 23:52:49.654113 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:58406.service: Deactivated successfully. May 14 23:52:49.655438 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:52:49.659227 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. May 14 23:52:49.661762 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). May 14 23:52:49.663134 systemd-logind[1432]: Removed session 4. May 14 23:52:49.705038 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:49.706203 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:49.712906 systemd-logind[1432]: New session 5 of user core. May 14 23:52:49.721024 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:52:49.768991 sshd[1608]: Connection closed by 10.0.0.1 port 58416 May 14 23:52:49.769399 sshd-session[1605]: pam_unix(sshd:session): session closed for user core May 14 23:52:49.781038 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:58416.service: Deactivated successfully. May 14 23:52:49.782272 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:52:49.789174 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. 
May 14 23:52:49.789407 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422). May 14 23:52:49.790719 systemd-logind[1432]: Removed session 5. May 14 23:52:49.838277 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:49.839321 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:49.844564 systemd-logind[1432]: New session 6 of user core. May 14 23:52:49.853081 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:52:49.903385 sshd[1616]: Connection closed by 10.0.0.1 port 58422 May 14 23:52:49.903760 sshd-session[1613]: pam_unix(sshd:session): session closed for user core May 14 23:52:49.920214 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:58424.service - OpenSSH per-connection server daemon (10.0.0.1:58424). May 14 23:52:49.920587 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:58422.service: Deactivated successfully. May 14 23:52:49.921937 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:52:49.924692 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. May 14 23:52:49.926570 systemd-logind[1432]: Removed session 6. May 14 23:52:49.957472 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 58424 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:49.958478 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:49.964300 systemd-logind[1432]: New session 7 of user core. May 14 23:52:49.974020 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 14 23:52:50.038086 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:52:50.040493 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:50.060750 sudo[1625]: pam_unix(sudo:session): session closed for user root May 14 23:52:50.062062 sshd[1624]: Connection closed by 10.0.0.1 port 58424 May 14 23:52:50.062607 sshd-session[1619]: pam_unix(sshd:session): session closed for user core May 14 23:52:50.073965 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428). May 14 23:52:50.074344 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:58424.service: Deactivated successfully. May 14 23:52:50.075677 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:52:50.077486 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. May 14 23:52:50.078788 systemd-logind[1432]: Removed session 7. May 14 23:52:50.115977 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:50.117192 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:50.121378 systemd-logind[1432]: New session 8 of user core. May 14 23:52:50.130070 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 23:52:50.179934 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:52:50.180200 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:50.182906 sudo[1635]: pam_unix(sudo:session): session closed for user root May 14 23:52:50.187091 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:52:50.187327 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:50.195792 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:52:50.228429 augenrules[1657]: No rules May 14 23:52:50.229376 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:52:50.229567 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:52:50.231034 sudo[1634]: pam_unix(sudo:session): session closed for user root May 14 23:52:50.232467 sshd[1633]: Connection closed by 10.0.0.1 port 58428 May 14 23:52:50.232357 sshd-session[1628]: pam_unix(sshd:session): session closed for user core May 14 23:52:50.244818 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:58428.service: Deactivated successfully. May 14 23:52:50.246062 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:52:50.246641 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. May 14 23:52:50.249063 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). May 14 23:52:50.251255 systemd-logind[1432]: Removed session 8. May 14 23:52:50.290109 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:52:50.291816 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:52:50.295983 systemd-logind[1432]: New session 9 of user core. 
May 14 23:52:50.314054 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:52:50.364500 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:52:50.364789 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:52:50.658930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:52:50.660545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:52:50.705684 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:52:50.720206 (dockerd)[1692]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:52:50.782502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:52:50.786349 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:52:50.821466 kubelet[1698]: E0514 23:52:50.821404 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:52:50.824644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:52:50.824798 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:52:50.825275 systemd[1]: kubelet.service: Consumed 133ms CPU time, 96.9M memory peak. 
May 14 23:52:50.983951 dockerd[1692]: time="2025-05-14T23:52:50.983815355Z" level=info msg="Starting up" May 14 23:52:50.985700 dockerd[1692]: time="2025-05-14T23:52:50.985663875Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 23:52:51.085195 dockerd[1692]: time="2025-05-14T23:52:51.085158915Z" level=info msg="Loading containers: start." May 14 23:52:51.239902 kernel: Initializing XFRM netlink socket May 14 23:52:51.295849 systemd-networkd[1393]: docker0: Link UP May 14 23:52:51.358011 dockerd[1692]: time="2025-05-14T23:52:51.357969395Z" level=info msg="Loading containers: done." May 14 23:52:51.369503 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2834156050-merged.mount: Deactivated successfully. May 14 23:52:51.373442 dockerd[1692]: time="2025-05-14T23:52:51.373385915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:52:51.373563 dockerd[1692]: time="2025-05-14T23:52:51.373480715Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 23:52:51.373673 dockerd[1692]: time="2025-05-14T23:52:51.373647155Z" level=info msg="Daemon has completed initialization" May 14 23:52:51.401009 dockerd[1692]: time="2025-05-14T23:52:51.400769755Z" level=info msg="API listen on /run/docker.sock" May 14 23:52:51.400964 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:52:52.075762 containerd[1449]: time="2025-05-14T23:52:52.075722955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 23:52:52.672011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940294246.mount: Deactivated successfully. 
May 14 23:52:53.511653 containerd[1449]: time="2025-05-14T23:52:53.511601235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:53.512593 containerd[1449]: time="2025-05-14T23:52:53.512357595Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 14 23:52:53.513709 containerd[1449]: time="2025-05-14T23:52:53.513666475Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:53.516206 containerd[1449]: time="2025-05-14T23:52:53.516155395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:53.517343 containerd[1449]: time="2025-05-14T23:52:53.517193235Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.44142832s" May 14 23:52:53.517343 containerd[1449]: time="2025-05-14T23:52:53.517231115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 23:52:53.517906 containerd[1449]: time="2025-05-14T23:52:53.517880555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 23:52:54.658073 containerd[1449]: time="2025-05-14T23:52:54.658028155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:52:54.659073 containerd[1449]: time="2025-05-14T23:52:54.658442155Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 14 23:52:54.660155 containerd[1449]: time="2025-05-14T23:52:54.660121235Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:54.662496 containerd[1449]: time="2025-05-14T23:52:54.662451515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:54.663340 containerd[1449]: time="2025-05-14T23:52:54.663314635Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.14540216s" May 14 23:52:54.663389 containerd[1449]: time="2025-05-14T23:52:54.663355795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 23:52:54.663832 containerd[1449]: time="2025-05-14T23:52:54.663813555Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 23:52:55.751300 containerd[1449]: time="2025-05-14T23:52:55.751253155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:55.752248 containerd[1449]: time="2025-05-14T23:52:55.752203515Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 14 23:52:55.752851 containerd[1449]: time="2025-05-14T23:52:55.752770555Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:55.757080 containerd[1449]: time="2025-05-14T23:52:55.756961515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.09311768s" May 14 23:52:55.757080 containerd[1449]: time="2025-05-14T23:52:55.756999195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 23:52:55.757416 containerd[1449]: time="2025-05-14T23:52:55.757393035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 23:52:55.758117 containerd[1449]: time="2025-05-14T23:52:55.758049275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:56.748592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729821843.mount: Deactivated successfully.
May 14 23:52:57.078273 containerd[1449]: time="2025-05-14T23:52:57.078115635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:57.078775 containerd[1449]: time="2025-05-14T23:52:57.078710875Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 14 23:52:57.079342 containerd[1449]: time="2025-05-14T23:52:57.079314155Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:57.081169 containerd[1449]: time="2025-05-14T23:52:57.081141635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:57.081973 containerd[1449]: time="2025-05-14T23:52:57.081620395Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.32419588s" May 14 23:52:57.081973 containerd[1449]: time="2025-05-14T23:52:57.081650315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 23:52:57.082176 containerd[1449]: time="2025-05-14T23:52:57.082077955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 23:52:57.654456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865771673.mount: Deactivated successfully. 
May 14 23:52:58.258482 containerd[1449]: time="2025-05-14T23:52:58.258433395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:58.259078 containerd[1449]: time="2025-05-14T23:52:58.259022155Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 23:52:58.259796 containerd[1449]: time="2025-05-14T23:52:58.259766635Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:58.264190 containerd[1449]: time="2025-05-14T23:52:58.262707195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:52:58.264190 containerd[1449]: time="2025-05-14T23:52:58.263728635Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.18162456s" May 14 23:52:58.264190 containerd[1449]: time="2025-05-14T23:52:58.263757115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 23:52:58.264551 containerd[1449]: time="2025-05-14T23:52:58.264430155Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:52:58.837062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769190079.mount: Deactivated successfully. 
May 14 23:52:58.841931 containerd[1449]: time="2025-05-14T23:52:58.841886715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:52:58.842396 containerd[1449]: time="2025-05-14T23:52:58.842344915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 23:52:58.843197 containerd[1449]: time="2025-05-14T23:52:58.843162875Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:52:58.844956 containerd[1449]: time="2025-05-14T23:52:58.844920235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:52:58.845559 containerd[1449]: time="2025-05-14T23:52:58.845525715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.9852ms" May 14 23:52:58.845605 containerd[1449]: time="2025-05-14T23:52:58.845557195Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 23:52:58.846342 containerd[1449]: time="2025-05-14T23:52:58.846323115Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 23:52:59.330220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125659361.mount: Deactivated successfully.
May 14 23:53:01.075184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:53:01.079064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:01.197204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:01.200834 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:01.243105 kubelet[2096]: E0514 23:53:01.242909 2096 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:01.246395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:01.246543 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:01.246830 systemd[1]: kubelet.service: Consumed 134ms CPU time, 96.5M memory peak.
May 14 23:53:01.410108 containerd[1449]: time="2025-05-14T23:53:01.409986835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:01.411362 containerd[1449]: time="2025-05-14T23:53:01.411091315Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 14 23:53:01.412890 containerd[1449]: time="2025-05-14T23:53:01.412077195Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:01.414849 containerd[1449]: time="2025-05-14T23:53:01.414813955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:01.416257 containerd[1449]: time="2025-05-14T23:53:01.416223555Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.56987212s" May 14 23:53:01.416326 containerd[1449]: time="2025-05-14T23:53:01.416259155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 23:53:06.229197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:06.229337 systemd[1]: kubelet.service: Consumed 134ms CPU time, 96.5M memory peak. May 14 23:53:06.232068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:06.277958 systemd[1]: Reload requested from client PID 2137 ('systemctl') (unit session-9.scope)... 
May 14 23:53:06.277981 systemd[1]: Reloading... May 14 23:53:06.357892 zram_generator::config[2189]: No configuration found. May 14 23:53:06.591985 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:06.666759 systemd[1]: Reloading finished in 388 ms. May 14 23:53:06.721964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:06.724709 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:06.726235 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:53:06.727969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:06.728022 systemd[1]: kubelet.service: Consumed 93ms CPU time, 82.4M memory peak. May 14 23:53:06.729705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:06.834978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:06.838819 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:53:06.873160 kubelet[2228]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:53:06.873160 kubelet[2228]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:53:06.873160 kubelet[2228]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:53:06.873596 kubelet[2228]: I0514 23:53:06.873546 2228 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:53:07.730209 kubelet[2228]: I0514 23:53:07.730160 2228 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:53:07.730209 kubelet[2228]: I0514 23:53:07.730194 2228 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:53:07.730504 kubelet[2228]: I0514 23:53:07.730476 2228 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:53:07.762259 kubelet[2228]: E0514 23:53:07.762217 2228 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:07.763478 kubelet[2228]: I0514 23:53:07.763455 2228 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:53:07.776401 kubelet[2228]: I0514 23:53:07.776337 2228 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 23:53:07.779960 kubelet[2228]: I0514 23:53:07.779937 2228 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:53:07.780259 kubelet[2228]: I0514 23:53:07.780246 2228 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:53:07.780388 kubelet[2228]: I0514 23:53:07.780362 2228 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:53:07.780551 kubelet[2228]: I0514 23:53:07.780389 2228 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 23:53:07.780695 kubelet[2228]: I0514 23:53:07.780684 2228 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:53:07.780695 kubelet[2228]: I0514 23:53:07.780695 2228 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:53:07.780825 kubelet[2228]: I0514 23:53:07.780813 2228 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:07.782534 kubelet[2228]: I0514 23:53:07.782502 2228 kubelet.go:408] "Attempting to sync node with API server" May 14 23:53:07.782534 kubelet[2228]: I0514 23:53:07.782534 2228 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:53:07.782654 kubelet[2228]: I0514 23:53:07.782626 2228 kubelet.go:314] "Adding apiserver pod source" May 14 23:53:07.782654 kubelet[2228]: I0514 23:53:07.782638 2228 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:53:07.783589 kubelet[2228]: W0514 23:53:07.783465 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 14 23:53:07.783589 kubelet[2228]: E0514 23:53:07.783532 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:07.784726 kubelet[2228]: W0514 23:53:07.784636 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 14 23:53:07.784726 kubelet[2228]: E0514
23:53:07.784699 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:07.785352 kubelet[2228]: I0514 23:53:07.785137 2228 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 23:53:07.787030 kubelet[2228]: I0514 23:53:07.787009 2228 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:53:07.791412 kubelet[2228]: W0514 23:53:07.791388 2228 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:53:07.792445 kubelet[2228]: I0514 23:53:07.792421 2228 server.go:1269] "Started kubelet" May 14 23:53:07.794843 kubelet[2228]: I0514 23:53:07.794787 2228 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:53:07.796183 kubelet[2228]: I0514 23:53:07.796159 2228 server.go:460] "Adding debug handlers to kubelet server" May 14 23:53:07.797036 kubelet[2228]: I0514 23:53:07.796448 2228 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:53:07.797036 kubelet[2228]: I0514 23:53:07.796710 2228 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:53:07.797036 kubelet[2228]: I0514 23:53:07.796901 2228 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:53:07.797182 kubelet[2228]: I0514 23:53:07.797108 2228 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:53:07.800378 kubelet[2228]: I0514 23:53:07.800345 
2228 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:53:07.800452 kubelet[2228]: I0514 23:53:07.800439 2228 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:53:07.800538 kubelet[2228]: I0514 23:53:07.800519 2228 reconciler.go:26] "Reconciler: start to sync state" May 14 23:53:07.800877 kubelet[2228]: W0514 23:53:07.800814 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 14 23:53:07.800937 kubelet[2228]: E0514 23:53:07.800880 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:07.802002 kubelet[2228]: E0514 23:53:07.801944 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:53:07.802148 kubelet[2228]: I0514 23:53:07.802127 2228 factory.go:221] Registration of the systemd container factory successfully May 14 23:53:07.802761 kubelet[2228]: I0514 23:53:07.802206 2228 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:53:07.802761 kubelet[2228]: E0514 23:53:07.802530 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" May 14 23:53:07.803373 kubelet[2228]: E0514 23:53:07.799376 2228 event.go:368] "Unable to write event 
(may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89e2d63427eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:53:07.792398315 +0000 UTC m=+0.950818041,LastTimestamp:2025-05-14 23:53:07.792398315 +0000 UTC m=+0.950818041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:53:07.804557 kubelet[2228]: E0514 23:53:07.803958 2228 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:53:07.804557 kubelet[2228]: I0514 23:53:07.804252 2228 factory.go:221] Registration of the containerd container factory successfully May 14 23:53:07.815276 kubelet[2228]: I0514 23:53:07.815236 2228 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:53:07.816618 kubelet[2228]: I0514 23:53:07.816596 2228 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:53:07.816736 kubelet[2228]: I0514 23:53:07.816725 2228 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:53:07.816967 kubelet[2228]: I0514 23:53:07.816953 2228 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:53:07.817106 kubelet[2228]: E0514 23:53:07.817086 2228 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:53:07.817838 kubelet[2228]: W0514 23:53:07.817791 2228 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 14 23:53:07.817981 kubelet[2228]: E0514 23:53:07.817960 2228 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 14 23:53:07.821097 kubelet[2228]: I0514 23:53:07.821077 2228 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:53:07.821222 kubelet[2228]: I0514 23:53:07.821211 2228 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:53:07.821289 kubelet[2228]: I0514 23:53:07.821279 2228 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:07.885844 kubelet[2228]: I0514 23:53:07.885811 2228 policy_none.go:49] "None policy: Start" May 14 23:53:07.887115 kubelet[2228]: I0514 23:53:07.887094 2228 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:53:07.887295 kubelet[2228]: I0514 23:53:07.887281 2228 state_mem.go:35] "Initializing new in-memory state store" May 14 23:53:07.893605 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 14 23:53:07.901999 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:53:07.902205 kubelet[2228]: E0514 23:53:07.902079 2228 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:53:07.905682 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:53:07.914764 kubelet[2228]: I0514 23:53:07.914675 2228 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:53:07.915164 kubelet[2228]: I0514 23:53:07.914888 2228 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:53:07.915164 kubelet[2228]: I0514 23:53:07.914901 2228 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:53:07.915469 kubelet[2228]: I0514 23:53:07.915437 2228 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:53:07.916415 kubelet[2228]: E0514 23:53:07.916357 2228 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:53:07.924574 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 23:53:07.936856 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 23:53:07.951222 systemd[1]: Created slice kubepods-burstable-pode6a66c9d75fb8930e86f12ec9266d9a1.slice - libcontainer container kubepods-burstable-pode6a66c9d75fb8930e86f12ec9266d9a1.slice. 
May 14 23:53:08.001490 kubelet[2228]: I0514 23:53:08.001237 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:08.001490 kubelet[2228]: I0514 23:53:08.001277 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:08.001490 kubelet[2228]: I0514 23:53:08.001298 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:08.001490 kubelet[2228]: I0514 23:53:08.001319 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:08.001490 kubelet[2228]: I0514 23:53:08.001333 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" 
May 14 23:53:08.001707 kubelet[2228]: I0514 23:53:08.001349 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:08.001707 kubelet[2228]: I0514 23:53:08.001364 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:08.001707 kubelet[2228]: I0514 23:53:08.001377 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:08.001707 kubelet[2228]: I0514 23:53:08.001399 2228 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 23:53:08.003909 kubelet[2228]: E0514 23:53:08.003842 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" May 14 23:53:08.017075 kubelet[2228]: I0514 23:53:08.017035 2228 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 23:53:08.017504 kubelet[2228]: E0514 23:53:08.017479 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 14 23:53:08.219249 kubelet[2228]: I0514 23:53:08.219191 2228 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 23:53:08.219570 kubelet[2228]: E0514 23:53:08.219534 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 14 23:53:08.235886 kubelet[2228]: E0514 23:53:08.235845 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.236493 containerd[1449]: time="2025-05-14T23:53:08.236413075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 23:53:08.249967 kubelet[2228]: E0514 23:53:08.249753 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.250244 containerd[1449]: time="2025-05-14T23:53:08.250193635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 23:53:08.253836 kubelet[2228]: E0514 23:53:08.253756 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.254515 containerd[1449]: time="2025-05-14T23:53:08.254352355Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6a66c9d75fb8930e86f12ec9266d9a1,Namespace:kube-system,Attempt:0,}" May 14 23:53:08.256313 containerd[1449]: time="2025-05-14T23:53:08.256281635Z" level=info msg="connecting to shim b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268" address="unix:///run/containerd/s/d60c2da3cf06a833e8e37aa4a34db378d65667a3389ec72d8519a765e9561f75" namespace=k8s.io protocol=ttrpc version=3 May 14 23:53:08.266103 kubelet[2228]: E0514 23:53:08.265962 2228 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89e2d63427eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:53:07.792398315 +0000 UTC m=+0.950818041,LastTimestamp:2025-05-14 23:53:07.792398315 +0000 UTC m=+0.950818041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:53:08.280817 containerd[1449]: time="2025-05-14T23:53:08.280615675Z" level=info msg="connecting to shim 58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6" address="unix:///run/containerd/s/7cc2fe0d13d4c8384f747843d023478a9a916665be1b1e5880b538c53af8faaf" namespace=k8s.io protocol=ttrpc version=3 May 14 23:53:08.285440 systemd[1]: Started cri-containerd-b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268.scope - libcontainer container b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268. 
May 14 23:53:08.290587 containerd[1449]: time="2025-05-14T23:53:08.290509635Z" level=info msg="connecting to shim 5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab" address="unix:///run/containerd/s/ec2714522546657ee4dbfa6832ac3a9b7e47de0081d2b48d40459f1a1f70b402" namespace=k8s.io protocol=ttrpc version=3 May 14 23:53:08.306031 systemd[1]: Started cri-containerd-58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6.scope - libcontainer container 58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6. May 14 23:53:08.308932 systemd[1]: Started cri-containerd-5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab.scope - libcontainer container 5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab. May 14 23:53:08.335246 containerd[1449]: time="2025-05-14T23:53:08.335106795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268\"" May 14 23:53:08.336700 kubelet[2228]: E0514 23:53:08.336671 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.342347 containerd[1449]: time="2025-05-14T23:53:08.342299755Z" level=info msg="CreateContainer within sandbox \"b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:53:08.347947 containerd[1449]: time="2025-05-14T23:53:08.347800755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6\"" May 14 23:53:08.348679 kubelet[2228]: E0514 23:53:08.348652 2228 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.349999 containerd[1449]: time="2025-05-14T23:53:08.349953435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6a66c9d75fb8930e86f12ec9266d9a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab\"" May 14 23:53:08.350794 containerd[1449]: time="2025-05-14T23:53:08.350489715Z" level=info msg="CreateContainer within sandbox \"58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:53:08.350859 kubelet[2228]: E0514 23:53:08.350748 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.352132 containerd[1449]: time="2025-05-14T23:53:08.352106195Z" level=info msg="CreateContainer within sandbox \"5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:53:08.354830 containerd[1449]: time="2025-05-14T23:53:08.354796115Z" level=info msg="Container 97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651: CDI devices from CRI Config.CDIDevices: []" May 14 23:53:08.363065 containerd[1449]: time="2025-05-14T23:53:08.363028075Z" level=info msg="Container cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145: CDI devices from CRI Config.CDIDevices: []" May 14 23:53:08.364829 containerd[1449]: time="2025-05-14T23:53:08.364796035Z" level=info msg="Container 6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe: CDI devices from CRI Config.CDIDevices: []" May 14 23:53:08.365725 containerd[1449]: time="2025-05-14T23:53:08.365670515Z" level=info 
msg="CreateContainer within sandbox \"b892d97a171a24eea2b2d4b4c787dbee93d190fd4e0c2a0648ac217d06732268\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651\"" May 14 23:53:08.366251 containerd[1449]: time="2025-05-14T23:53:08.366192115Z" level=info msg="StartContainer for \"97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651\"" May 14 23:53:08.367403 containerd[1449]: time="2025-05-14T23:53:08.367363875Z" level=info msg="connecting to shim 97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651" address="unix:///run/containerd/s/d60c2da3cf06a833e8e37aa4a34db378d65667a3389ec72d8519a765e9561f75" protocol=ttrpc version=3 May 14 23:53:08.370625 containerd[1449]: time="2025-05-14T23:53:08.370563195Z" level=info msg="CreateContainer within sandbox \"58be8bed73257a404e5fb48d3815138f66a37f4b58733185bdfe8dc9b03fcfb6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145\"" May 14 23:53:08.372855 containerd[1449]: time="2025-05-14T23:53:08.371498555Z" level=info msg="StartContainer for \"cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145\"" May 14 23:53:08.372855 containerd[1449]: time="2025-05-14T23:53:08.372216955Z" level=info msg="CreateContainer within sandbox \"5156a30e0e9f166f916aba6a7b7671943c4e8405647c04a322218c6fc66d22ab\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe\"" May 14 23:53:08.372855 containerd[1449]: time="2025-05-14T23:53:08.372449275Z" level=info msg="connecting to shim cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145" address="unix:///run/containerd/s/7cc2fe0d13d4c8384f747843d023478a9a916665be1b1e5880b538c53af8faaf" protocol=ttrpc version=3 May 14 23:53:08.372855 containerd[1449]: time="2025-05-14T23:53:08.372823275Z" 
level=info msg="StartContainer for \"6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe\"" May 14 23:53:08.373758 containerd[1449]: time="2025-05-14T23:53:08.373731115Z" level=info msg="connecting to shim 6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe" address="unix:///run/containerd/s/ec2714522546657ee4dbfa6832ac3a9b7e47de0081d2b48d40459f1a1f70b402" protocol=ttrpc version=3 May 14 23:53:08.394025 systemd[1]: Started cri-containerd-6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe.scope - libcontainer container 6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe. May 14 23:53:08.395283 systemd[1]: Started cri-containerd-97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651.scope - libcontainer container 97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651. May 14 23:53:08.398790 systemd[1]: Started cri-containerd-cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145.scope - libcontainer container cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145. 
May 14 23:53:08.405709 kubelet[2228]: E0514 23:53:08.405619 2228 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" May 14 23:53:08.456408 containerd[1449]: time="2025-05-14T23:53:08.456367955Z" level=info msg="StartContainer for \"97a798784b9a34b7b1dcdd58eeafb9120834faf86a7cf0b18efbc652d07d5651\" returns successfully" May 14 23:53:08.456633 containerd[1449]: time="2025-05-14T23:53:08.456503915Z" level=info msg="StartContainer for \"6eb3d4683004222dfa4814e2c6ebd37fbcbbd797ac9698bae8e772c24b5b2bfe\" returns successfully" May 14 23:53:08.504228 containerd[1449]: time="2025-05-14T23:53:08.501804155Z" level=info msg="StartContainer for \"cf0a5fe1137066aa0b96858ab961c6fe44348031c57c43b5c248c304aaa72145\" returns successfully" May 14 23:53:08.622079 kubelet[2228]: I0514 23:53:08.621723 2228 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 23:53:08.622079 kubelet[2228]: E0514 23:53:08.622045 2228 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 14 23:53:08.829073 kubelet[2228]: E0514 23:53:08.827942 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.831721 kubelet[2228]: E0514 23:53:08.831697 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:08.831919 kubelet[2228]: E0514 23:53:08.831858 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:09.423802 kubelet[2228]: I0514 23:53:09.423745 2228 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 23:53:09.833882 kubelet[2228]: E0514 23:53:09.833635 2228 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:10.708749 kubelet[2228]: E0514 23:53:10.708702 2228 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 23:53:10.784705 kubelet[2228]: I0514 23:53:10.784664 2228 apiserver.go:52] "Watching apiserver" May 14 23:53:10.800925 kubelet[2228]: I0514 23:53:10.800892 2228 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:53:10.854361 kubelet[2228]: I0514 23:53:10.854330 2228 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 23:53:10.857875 kubelet[2228]: E0514 23:53:10.854486 2228 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 23:53:12.829377 systemd[1]: Reload requested from client PID 2505 ('systemctl') (unit session-9.scope)... May 14 23:53:12.829395 systemd[1]: Reloading... May 14 23:53:12.901913 zram_generator::config[2549]: No configuration found. May 14 23:53:12.983833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:13.068641 systemd[1]: Reloading finished in 238 ms. May 14 23:53:13.089710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:13.103706 systemd[1]: kubelet.service: Deactivated successfully. 
May 14 23:53:13.105904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:13.105961 systemd[1]: kubelet.service: Consumed 1.337s CPU time, 117M memory peak. May 14 23:53:13.107567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:13.213937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:13.217951 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:53:13.255430 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:53:13.255430 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:53:13.255430 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:53:13.255805 kubelet[2591]: I0514 23:53:13.255475 2591 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:53:13.266961 kubelet[2591]: I0514 23:53:13.266239 2591 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:53:13.266961 kubelet[2591]: I0514 23:53:13.266269 2591 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:53:13.266961 kubelet[2591]: I0514 23:53:13.266653 2591 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:53:13.268512 kubelet[2591]: I0514 23:53:13.268492 2591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:53:13.270644 kubelet[2591]: I0514 23:53:13.270620 2591 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:53:13.273949 kubelet[2591]: I0514 23:53:13.273923 2591 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 23:53:13.276127 kubelet[2591]: I0514 23:53:13.276093 2591 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:53:13.276254 kubelet[2591]: I0514 23:53:13.276228 2591 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:53:13.276372 kubelet[2591]: I0514 23:53:13.276333 2591 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:53:13.276533 kubelet[2591]: I0514 23:53:13.276365 2591 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 23:53:13.276616 kubelet[2591]: I0514 23:53:13.276539 2591 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:53:13.276616 kubelet[2591]: I0514 23:53:13.276548 2591 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:53:13.276616 kubelet[2591]: I0514 23:53:13.276577 2591 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:13.276692 kubelet[2591]: I0514 23:53:13.276677 2591 kubelet.go:408] "Attempting to sync node with API server" May 14 23:53:13.276719 kubelet[2591]: I0514 23:53:13.276699 2591 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:53:13.276739 kubelet[2591]: I0514 23:53:13.276720 2591 kubelet.go:314] "Adding apiserver pod source" May 14 23:53:13.276739 kubelet[2591]: I0514 23:53:13.276730 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:53:13.278493 kubelet[2591]: I0514 23:53:13.277254 2591 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 23:53:13.278493 kubelet[2591]: I0514 23:53:13.277988 2591 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:53:13.278493 kubelet[2591]: I0514 23:53:13.278368 2591 server.go:1269] "Started kubelet" May 14 23:53:13.283979 kubelet[2591]: I0514 23:53:13.283930 2591 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:53:13.286659 kubelet[2591]: I0514 23:53:13.286197 2591 server.go:460] "Adding debug handlers to kubelet server" May 14 23:53:13.286884 kubelet[2591]: E0514 23:53:13.286843 2591 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:53:13.291066 kubelet[2591]: I0514 23:53:13.291014 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:53:13.291247 kubelet[2591]: I0514 23:53:13.291227 2591 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:53:13.292715 kubelet[2591]: I0514 23:53:13.292698 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:53:13.292892 kubelet[2591]: I0514 23:53:13.292841 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:53:13.294517 kubelet[2591]: I0514 23:53:13.294495 2591 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:53:13.295001 kubelet[2591]: I0514 23:53:13.294973 2591 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:53:13.295209 kubelet[2591]: I0514 23:53:13.295194 2591 reconciler.go:26] "Reconciler: start to sync state" May 14 23:53:13.295498 kubelet[2591]: E0514 23:53:13.295476 2591 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:53:13.295683 kubelet[2591]: I0514 23:53:13.295656 2591 factory.go:221] Registration of the systemd container factory successfully May 14 23:53:13.295813 kubelet[2591]: I0514 23:53:13.295759 2591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:53:13.298286 kubelet[2591]: I0514 23:53:13.298262 2591 factory.go:221] Registration of the containerd container factory successfully May 14 23:53:13.308964 kubelet[2591]: I0514 23:53:13.308658 2591 kubelet_network_linux.go:50] "Initialized iptables 
rules." protocol="IPv4" May 14 23:53:13.309670 kubelet[2591]: I0514 23:53:13.309651 2591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:53:13.309764 kubelet[2591]: I0514 23:53:13.309754 2591 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:53:13.309829 kubelet[2591]: I0514 23:53:13.309820 2591 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:53:13.309955 kubelet[2591]: E0514 23:53:13.309931 2591 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:53:13.336209 kubelet[2591]: I0514 23:53:13.336171 2591 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:53:13.336347 kubelet[2591]: I0514 23:53:13.336334 2591 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:53:13.336630 kubelet[2591]: I0514 23:53:13.336421 2591 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:13.336630 kubelet[2591]: I0514 23:53:13.336563 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:53:13.336630 kubelet[2591]: I0514 23:53:13.336574 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:53:13.336630 kubelet[2591]: I0514 23:53:13.336593 2591 policy_none.go:49] "None policy: Start" May 14 23:53:13.337310 kubelet[2591]: I0514 23:53:13.337290 2591 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:53:13.337375 kubelet[2591]: I0514 23:53:13.337317 2591 state_mem.go:35] "Initializing new in-memory state store" May 14 23:53:13.337502 kubelet[2591]: I0514 23:53:13.337487 2591 state_mem.go:75] "Updated machine memory state" May 14 23:53:13.342219 kubelet[2591]: I0514 23:53:13.341581 2591 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:53:13.342219 kubelet[2591]: I0514 23:53:13.341726 2591 eviction_manager.go:189] 
"Eviction manager: starting control loop" May 14 23:53:13.342219 kubelet[2591]: I0514 23:53:13.341736 2591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:53:13.342219 kubelet[2591]: I0514 23:53:13.342092 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:53:13.445306 kubelet[2591]: I0514 23:53:13.445280 2591 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 23:53:13.451353 kubelet[2591]: I0514 23:53:13.451277 2591 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 23:53:13.451612 kubelet[2591]: I0514 23:53:13.451571 2591 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 23:53:13.597023 kubelet[2591]: I0514 23:53:13.596896 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:13.597023 kubelet[2591]: I0514 23:53:13.596942 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:13.597023 kubelet[2591]: I0514 23:53:13.596966 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 
23:53:13.597023 kubelet[2591]: I0514 23:53:13.596986 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:13.597023 kubelet[2591]: I0514 23:53:13.597003 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:13.597230 kubelet[2591]: I0514 23:53:13.597019 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6a66c9d75fb8930e86f12ec9266d9a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6a66c9d75fb8930e86f12ec9266d9a1\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:13.597230 kubelet[2591]: I0514 23:53:13.597034 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:13.597230 kubelet[2591]: I0514 23:53:13.597079 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:13.597230 
kubelet[2591]: I0514 23:53:13.597102 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 23:53:13.722459 kubelet[2591]: E0514 23:53:13.722415 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:13.722459 kubelet[2591]: E0514 23:53:13.722433 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:13.723857 kubelet[2591]: E0514 23:53:13.723568 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:13.833702 sudo[2627]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:53:13.834090 sudo[2627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:53:14.260150 sudo[2627]: pam_unix(sudo:session): session closed for user root May 14 23:53:14.277463 kubelet[2591]: I0514 23:53:14.277426 2591 apiserver.go:52] "Watching apiserver" May 14 23:53:14.297303 kubelet[2591]: I0514 23:53:14.296057 2591 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:53:14.321293 kubelet[2591]: E0514 23:53:14.321250 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:14.328208 kubelet[2591]: E0514 23:53:14.328127 2591 kubelet.go:1915] "Failed creating 
a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:53:14.328337 kubelet[2591]: E0514 23:53:14.328252 2591 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 23:53:14.328371 kubelet[2591]: E0514 23:53:14.328356 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:14.328469 kubelet[2591]: E0514 23:53:14.328443 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:14.350359 kubelet[2591]: I0514 23:53:14.348603 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.348508684 podStartE2EDuration="1.348508684s" podCreationTimestamp="2025-05-14 23:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:14.348247178 +0000 UTC m=+1.127027192" watchObservedRunningTime="2025-05-14 23:53:14.348508684 +0000 UTC m=+1.127288658" May 14 23:53:14.362680 kubelet[2591]: I0514 23:53:14.362624 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3626085620000001 podStartE2EDuration="1.362608562s" podCreationTimestamp="2025-05-14 23:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:14.355999818 +0000 UTC m=+1.134779872" watchObservedRunningTime="2025-05-14 23:53:14.362608562 +0000 UTC m=+1.141388536" May 14 23:53:14.370589 kubelet[2591]: I0514 23:53:14.370527 
2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.370511513 podStartE2EDuration="1.370511513s" podCreationTimestamp="2025-05-14 23:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:14.362565645 +0000 UTC m=+1.141345579" watchObservedRunningTime="2025-05-14 23:53:14.370511513 +0000 UTC m=+1.149291487" May 14 23:53:15.322951 kubelet[2591]: E0514 23:53:15.322913 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:15.322951 kubelet[2591]: E0514 23:53:15.322927 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:16.181856 sudo[1669]: pam_unix(sudo:session): session closed for user root May 14 23:53:16.182986 sshd[1668]: Connection closed by 10.0.0.1 port 58430 May 14 23:53:16.183458 sshd-session[1665]: pam_unix(sshd:session): session closed for user core May 14 23:53:16.187113 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:58430.service: Deactivated successfully. May 14 23:53:16.188970 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:53:16.189145 systemd[1]: session-9.scope: Consumed 7.205s CPU time, 262.2M memory peak. May 14 23:53:16.190041 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit. May 14 23:53:16.190884 systemd-logind[1432]: Removed session 9. 
May 14 23:53:16.933299 kubelet[2591]: E0514 23:53:16.933270 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:18.165244 kubelet[2591]: I0514 23:53:18.165194 2591 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:53:18.165600 containerd[1449]: time="2025-05-14T23:53:18.165497119Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:53:18.165774 kubelet[2591]: I0514 23:53:18.165664 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:53:18.807356 systemd[1]: Created slice kubepods-besteffort-pod9b082e97_c9cc_44f6_a306_d28fe4f15e90.slice - libcontainer container kubepods-besteffort-pod9b082e97_c9cc_44f6_a306_d28fe4f15e90.slice. May 14 23:53:18.825363 systemd[1]: Created slice kubepods-burstable-pode2151e29_4440_47e8_b48b_314528425e07.slice - libcontainer container kubepods-burstable-pode2151e29_4440_47e8_b48b_314528425e07.slice. 
May 14 23:53:18.925909 kubelet[2591]: I0514 23:53:18.925789 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-net\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.925909 kubelet[2591]: I0514 23:53:18.925839 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b082e97-c9cc-44f6-a306-d28fe4f15e90-kube-proxy\") pod \"kube-proxy-cms46\" (UID: \"9b082e97-c9cc-44f6-a306-d28fe4f15e90\") " pod="kube-system/kube-proxy-cms46" May 14 23:53:18.925909 kubelet[2591]: I0514 23:53:18.925856 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-run\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.925909 kubelet[2591]: I0514 23:53:18.925919 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-cgroup\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.925990 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-hostproc\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.926027 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2151e29-4440-47e8-b48b-314528425e07-clustermesh-secrets\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.926043 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-hubble-tls\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.926059 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24l2\" (UniqueName: \"kubernetes.io/projected/9b082e97-c9cc-44f6-a306-d28fe4f15e90-kube-api-access-r24l2\") pod \"kube-proxy-cms46\" (UID: \"9b082e97-c9cc-44f6-a306-d28fe4f15e90\") " pod="kube-system/kube-proxy-cms46" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.926087 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-etc-cni-netd\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926149 kubelet[2591]: I0514 23:53:18.926113 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2151e29-4440-47e8-b48b-314528425e07-cilium-config-path\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926140 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-kernel\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926158 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b082e97-c9cc-44f6-a306-d28fe4f15e90-xtables-lock\") pod \"kube-proxy-cms46\" (UID: \"9b082e97-c9cc-44f6-a306-d28fe4f15e90\") " pod="kube-system/kube-proxy-cms46" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926181 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-bpf-maps\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926196 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cni-path\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926210 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gns44\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-kube-api-access-gns44\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926281 kubelet[2591]: I0514 23:53:18.926224 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b082e97-c9cc-44f6-a306-d28fe4f15e90-lib-modules\") pod \"kube-proxy-cms46\" (UID: 
\"9b082e97-c9cc-44f6-a306-d28fe4f15e90\") " pod="kube-system/kube-proxy-cms46" May 14 23:53:18.926405 kubelet[2591]: I0514 23:53:18.926247 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-lib-modules\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:18.926405 kubelet[2591]: I0514 23:53:18.926261 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-xtables-lock\") pod \"cilium-99mbp\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " pod="kube-system/cilium-99mbp" May 14 23:53:19.118066 kubelet[2591]: E0514 23:53:19.117972 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:19.119755 containerd[1449]: time="2025-05-14T23:53:19.119705155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cms46,Uid:9b082e97-c9cc-44f6-a306-d28fe4f15e90,Namespace:kube-system,Attempt:0,}" May 14 23:53:19.127572 systemd[1]: Created slice kubepods-besteffort-podb5c2799d_929b_42f6_8c54_16764acabe65.slice - libcontainer container kubepods-besteffort-podb5c2799d_929b_42f6_8c54_16764acabe65.slice. 
May 14 23:53:19.128515 kubelet[2591]: I0514 23:53:19.128482 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnkf\" (UniqueName: \"kubernetes.io/projected/b5c2799d-929b-42f6-8c54-16764acabe65-kube-api-access-jfnkf\") pod \"cilium-operator-5d85765b45-wxhj4\" (UID: \"b5c2799d-929b-42f6-8c54-16764acabe65\") " pod="kube-system/cilium-operator-5d85765b45-wxhj4" May 14 23:53:19.128597 kubelet[2591]: I0514 23:53:19.128522 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c2799d-929b-42f6-8c54-16764acabe65-cilium-config-path\") pod \"cilium-operator-5d85765b45-wxhj4\" (UID: \"b5c2799d-929b-42f6-8c54-16764acabe65\") " pod="kube-system/cilium-operator-5d85765b45-wxhj4" May 14 23:53:19.128743 kubelet[2591]: E0514 23:53:19.128710 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:19.129541 containerd[1449]: time="2025-05-14T23:53:19.129393756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99mbp,Uid:e2151e29-4440-47e8-b48b-314528425e07,Namespace:kube-system,Attempt:0,}" May 14 23:53:19.163761 containerd[1449]: time="2025-05-14T23:53:19.163723583Z" level=info msg="connecting to shim 4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" namespace=k8s.io protocol=ttrpc version=3 May 14 23:53:19.171861 containerd[1449]: time="2025-05-14T23:53:19.171546541Z" level=info msg="connecting to shim cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f" address="unix:///run/containerd/s/5cd752bf90c2f8923b10fb0a8255a4c3b5d346321e299649bf88dabb3cc89996" namespace=k8s.io protocol=ttrpc version=3 May 14 23:53:19.190031 systemd[1]: 
Started cri-containerd-4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e.scope - libcontainer container 4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e. May 14 23:53:19.193682 systemd[1]: Started cri-containerd-cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f.scope - libcontainer container cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f. May 14 23:53:19.218336 containerd[1449]: time="2025-05-14T23:53:19.218299057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99mbp,Uid:e2151e29-4440-47e8-b48b-314528425e07,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\"" May 14 23:53:19.220590 kubelet[2591]: E0514 23:53:19.219444 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:19.221354 containerd[1449]: time="2025-05-14T23:53:19.221331652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cms46,Uid:9b082e97-c9cc-44f6-a306-d28fe4f15e90,Namespace:kube-system,Attempt:0,} returns sandbox id \"cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f\"" May 14 23:53:19.221951 containerd[1449]: time="2025-05-14T23:53:19.221690597Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 23:53:19.222061 kubelet[2591]: E0514 23:53:19.221821 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:19.223370 containerd[1449]: time="2025-05-14T23:53:19.223326410Z" level=info msg="CreateContainer within sandbox \"cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 
23:53:19.230757 containerd[1449]: time="2025-05-14T23:53:19.230728305Z" level=info msg="Container f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee: CDI devices from CRI Config.CDIDevices: []" May 14 23:53:19.237397 containerd[1449]: time="2025-05-14T23:53:19.237355593Z" level=info msg="CreateContainer within sandbox \"cddb1b7cc89a5276c6c5ee0477e8a988d9beb5f18c4758890b99e0cd9fa7649f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee\"" May 14 23:53:19.238970 containerd[1449]: time="2025-05-14T23:53:19.238348912Z" level=info msg="StartContainer for \"f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee\"" May 14 23:53:19.239882 containerd[1449]: time="2025-05-14T23:53:19.239840570Z" level=info msg="connecting to shim f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee" address="unix:///run/containerd/s/5cd752bf90c2f8923b10fb0a8255a4c3b5d346321e299649bf88dabb3cc89996" protocol=ttrpc version=3 May 14 23:53:19.259040 systemd[1]: Started cri-containerd-f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee.scope - libcontainer container f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee. 
May 14 23:53:19.290953 containerd[1449]: time="2025-05-14T23:53:19.290916308Z" level=info msg="StartContainer for \"f0f2bfec87c93b032710949b90f4cffbdb75e16314f4190b3cb14a05e1b979ee\" returns successfully"
May 14 23:53:19.330218 kubelet[2591]: E0514 23:53:19.329760 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:19.340162 kubelet[2591]: I0514 23:53:19.340113 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cms46" podStartSLOduration=1.3400992440000001 podStartE2EDuration="1.340099244s" podCreationTimestamp="2025-05-14 23:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:19.339934171 +0000 UTC m=+6.118714145" watchObservedRunningTime="2025-05-14 23:53:19.340099244 +0000 UTC m=+6.118879218"
May 14 23:53:19.430431 kubelet[2591]: E0514 23:53:19.430280 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:19.431847 containerd[1449]: time="2025-05-14T23:53:19.431604158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wxhj4,Uid:b5c2799d-929b-42f6-8c54-16764acabe65,Namespace:kube-system,Attempt:0,}"
May 14 23:53:19.449018 containerd[1449]: time="2025-05-14T23:53:19.448982443Z" level=info msg="connecting to shim f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555" address="unix:///run/containerd/s/a04abdbad51820a4d41ecf4d6ab0afc4b7365ca5d1fd87c830e7b170b855f04b" namespace=k8s.io protocol=ttrpc version=3
May 14 23:53:19.473078 systemd[1]: Started cri-containerd-f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555.scope - libcontainer container f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555.
May 14 23:53:19.511547 containerd[1449]: time="2025-05-14T23:53:19.511509349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wxhj4,Uid:b5c2799d-929b-42f6-8c54-16764acabe65,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\""
May 14 23:53:19.512341 kubelet[2591]: E0514 23:53:19.512319 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:20.505626 kubelet[2591]: E0514 23:53:20.505588 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:21.335473 kubelet[2591]: E0514 23:53:21.335425 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:22.564666 update_engine[1434]: I20250514 23:53:22.564534 1434 update_attempter.cc:509] Updating boot flags...
May 14 23:53:22.610893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2971)
May 14 23:53:22.664917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2974)
May 14 23:53:22.707931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 47 scanned by (udev-worker) (2974)
May 14 23:53:23.896731 kubelet[2591]: E0514 23:53:23.896688 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:24.080845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806337953.mount: Deactivated successfully.
May 14 23:53:26.943860 kubelet[2591]: E0514 23:53:26.943283 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:27.728720 containerd[1449]: time="2025-05-14T23:53:27.728642043Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:53:27.729226 containerd[1449]: time="2025-05-14T23:53:27.729180230Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 14 23:53:27.730434 containerd[1449]: time="2025-05-14T23:53:27.729874293Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:53:27.731673 containerd[1449]: time="2025-05-14T23:53:27.731567892Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.509800097s"
May 14 23:53:27.731673 containerd[1449]: time="2025-05-14T23:53:27.731605891Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 14 23:53:27.739449 containerd[1449]: time="2025-05-14T23:53:27.739414859Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 23:53:27.746102 containerd[1449]: time="2025-05-14T23:53:27.746054736Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:53:27.768789 containerd[1449]: time="2025-05-14T23:53:27.768721859Z" level=info msg="Container 9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:27.781401 containerd[1449]: time="2025-05-14T23:53:27.781361869Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\""
May 14 23:53:27.786127 containerd[1449]: time="2025-05-14T23:53:27.786078353Z" level=info msg="StartContainer for \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\""
May 14 23:53:27.787707 containerd[1449]: time="2025-05-14T23:53:27.787665034Z" level=info msg="connecting to shim 9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" protocol=ttrpc version=3
May 14 23:53:27.822044 systemd[1]: Started cri-containerd-9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2.scope - libcontainer container 9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2.
May 14 23:53:27.852529 containerd[1449]: time="2025-05-14T23:53:27.852467042Z" level=info msg="StartContainer for \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" returns successfully"
May 14 23:53:27.919208 systemd[1]: cri-containerd-9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2.scope: Deactivated successfully.
May 14 23:53:27.939498 containerd[1449]: time="2025-05-14T23:53:27.939444386Z" level=info msg="received exit event container_id:\"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" id:\"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" pid:3020 exited_at:{seconds:1747266807 nanos:926586142}"
May 14 23:53:27.939627 containerd[1449]: time="2025-05-14T23:53:27.939544384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" id:\"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" pid:3020 exited_at:{seconds:1747266807 nanos:926586142}"
May 14 23:53:27.971154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2-rootfs.mount: Deactivated successfully.
May 14 23:53:28.356037 kubelet[2591]: E0514 23:53:28.355998 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:28.359855 containerd[1449]: time="2025-05-14T23:53:28.359806854Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:53:28.369068 containerd[1449]: time="2025-05-14T23:53:28.369025882Z" level=info msg="Container 349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:28.374202 containerd[1449]: time="2025-05-14T23:53:28.374164683Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\""
May 14 23:53:28.375724 containerd[1449]: time="2025-05-14T23:53:28.375629090Z" level=info msg="StartContainer for \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\""
May 14 23:53:28.376727 containerd[1449]: time="2025-05-14T23:53:28.376654666Z" level=info msg="connecting to shim 349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" protocol=ttrpc version=3
May 14 23:53:28.400035 systemd[1]: Started cri-containerd-349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe.scope - libcontainer container 349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe.
May 14 23:53:28.426038 containerd[1449]: time="2025-05-14T23:53:28.426002210Z" level=info msg="StartContainer for \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" returns successfully"
May 14 23:53:28.455720 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:53:28.455967 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:53:28.456686 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 23:53:28.458754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:53:28.458971 systemd[1]: cri-containerd-349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe.scope: Deactivated successfully.
May 14 23:53:28.460575 containerd[1449]: time="2025-05-14T23:53:28.460514695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" id:\"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" pid:3064 exited_at:{seconds:1747266808 nanos:460159983}"
May 14 23:53:28.460683 containerd[1449]: time="2025-05-14T23:53:28.460584534Z" level=info msg="received exit event container_id:\"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" id:\"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" pid:3064 exited_at:{seconds:1747266808 nanos:460159983}"
May 14 23:53:28.500987 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:53:28.765445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293445862.mount: Deactivated successfully.
May 14 23:53:29.110432 containerd[1449]: time="2025-05-14T23:53:29.110312012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:53:29.111572 containerd[1449]: time="2025-05-14T23:53:29.110737003Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 23:53:29.111572 containerd[1449]: time="2025-05-14T23:53:29.111523226Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:53:29.113326 containerd[1449]: time="2025-05-14T23:53:29.112912316Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.373457618s"
May 14 23:53:29.113326 containerd[1449]: time="2025-05-14T23:53:29.112950595Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 23:53:29.114892 containerd[1449]: time="2025-05-14T23:53:29.114847794Z" level=info msg="CreateContainer within sandbox \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 23:53:29.120830 containerd[1449]: time="2025-05-14T23:53:29.120782906Z" level=info msg="Container 50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:29.128594 containerd[1449]: time="2025-05-14T23:53:29.128550418Z" level=info msg="CreateContainer within sandbox \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\""
May 14 23:53:29.130422 containerd[1449]: time="2025-05-14T23:53:29.130388299Z" level=info msg="StartContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\""
May 14 23:53:29.131295 containerd[1449]: time="2025-05-14T23:53:29.131267040Z" level=info msg="connecting to shim 50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc" address="unix:///run/containerd/s/a04abdbad51820a4d41ecf4d6ab0afc4b7365ca5d1fd87c830e7b170b855f04b" protocol=ttrpc version=3
May 14 23:53:29.155075 systemd[1]: Started cri-containerd-50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc.scope - libcontainer container 50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc.
May 14 23:53:29.186009 containerd[1449]: time="2025-05-14T23:53:29.185952499Z" level=info msg="StartContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" returns successfully"
May 14 23:53:29.367027 kubelet[2591]: E0514 23:53:29.366653 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:29.370994 containerd[1449]: time="2025-05-14T23:53:29.370149563Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:53:29.371299 kubelet[2591]: E0514 23:53:29.371172 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:29.390908 containerd[1449]: time="2025-05-14T23:53:29.389765980Z" level=info msg="Container f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:29.416033 containerd[1449]: time="2025-05-14T23:53:29.415986334Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\""
May 14 23:53:29.416850 containerd[1449]: time="2025-05-14T23:53:29.416801876Z" level=info msg="StartContainer for \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\""
May 14 23:53:29.418488 containerd[1449]: time="2025-05-14T23:53:29.418462521Z" level=info msg="connecting to shim f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" protocol=ttrpc version=3
May 14 23:53:29.456063 systemd[1]: Started cri-containerd-f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94.scope - libcontainer container f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94.
May 14 23:53:29.512893 containerd[1449]: time="2025-05-14T23:53:29.512120379Z" level=info msg="StartContainer for \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" returns successfully"
May 14 23:53:29.514959 systemd[1]: cri-containerd-f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94.scope: Deactivated successfully.
May 14 23:53:29.516474 containerd[1449]: time="2025-05-14T23:53:29.515952936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" id:\"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" pid:3161 exited_at:{seconds:1747266809 nanos:515607184}"
May 14 23:53:29.516731 containerd[1449]: time="2025-05-14T23:53:29.516066894Z" level=info msg="received exit event container_id:\"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" id:\"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" pid:3161 exited_at:{seconds:1747266809 nanos:515607184}"
May 14 23:53:30.376594 kubelet[2591]: E0514 23:53:30.376563 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:30.377077 kubelet[2591]: E0514 23:53:30.376608 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:30.379541 containerd[1449]: time="2025-05-14T23:53:30.379505327Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:53:30.390348 containerd[1449]: time="2025-05-14T23:53:30.388166272Z" level=info msg="Container e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:30.405622 kubelet[2591]: I0514 23:53:30.404988 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wxhj4" podStartSLOduration=1.804208525 podStartE2EDuration="11.404969732s" podCreationTimestamp="2025-05-14 23:53:19 +0000 UTC" firstStartedPulling="2025-05-14 23:53:19.512883973 +0000 UTC m=+6.291663907" lastFinishedPulling="2025-05-14 23:53:29.11364514 +0000 UTC m=+15.892425114" observedRunningTime="2025-05-14 23:53:29.422007124 +0000 UTC m=+16.200787138" watchObservedRunningTime="2025-05-14 23:53:30.404969732 +0000 UTC m=+17.183749706"
May 14 23:53:30.407124 containerd[1449]: time="2025-05-14T23:53:30.407088529Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\""
May 14 23:53:30.407658 containerd[1449]: time="2025-05-14T23:53:30.407634438Z" level=info msg="StartContainer for \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\""
May 14 23:53:30.408899 containerd[1449]: time="2025-05-14T23:53:30.408724016Z" level=info msg="connecting to shim e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" protocol=ttrpc version=3
May 14 23:53:30.432037 systemd[1]: Started cri-containerd-e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8.scope - libcontainer container e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8.
May 14 23:53:30.453529 systemd[1]: cri-containerd-e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8.scope: Deactivated successfully.
May 14 23:53:30.454427 containerd[1449]: time="2025-05-14T23:53:30.454386532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" id:\"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" pid:3200 exited_at:{seconds:1747266810 nanos:454146377}"
May 14 23:53:30.459853 containerd[1449]: time="2025-05-14T23:53:30.459592067Z" level=info msg="received exit event container_id:\"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" id:\"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" pid:3200 exited_at:{seconds:1747266810 nanos:454146377}"
May 14 23:53:30.461327 containerd[1449]: time="2025-05-14T23:53:30.461294752Z" level=info msg="StartContainer for \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" returns successfully"
May 14 23:53:30.476829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8-rootfs.mount: Deactivated successfully.
May 14 23:53:31.387443 kubelet[2591]: E0514 23:53:31.387168 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:31.391316 containerd[1449]: time="2025-05-14T23:53:31.391269707Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:53:31.422239 containerd[1449]: time="2025-05-14T23:53:31.422195240Z" level=info msg="Container 96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:31.430297 containerd[1449]: time="2025-05-14T23:53:31.430253768Z" level=info msg="CreateContainer within sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\""
May 14 23:53:31.430921 containerd[1449]: time="2025-05-14T23:53:31.430899755Z" level=info msg="StartContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\""
May 14 23:53:31.431751 containerd[1449]: time="2025-05-14T23:53:31.431728100Z" level=info msg="connecting to shim 96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913" address="unix:///run/containerd/s/217073cb03b32e67fd6ea360ef9d8a2cab3c9d98347d4a5cd222721d8a18a946" protocol=ttrpc version=3
May 14 23:53:31.453033 systemd[1]: Started cri-containerd-96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913.scope - libcontainer container 96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913.
May 14 23:53:31.497002 containerd[1449]: time="2025-05-14T23:53:31.496957182Z" level=info msg="StartContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" returns successfully"
May 14 23:53:31.619584 containerd[1449]: time="2025-05-14T23:53:31.619545337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" id:\"2c3ee2fd466b1a583f1d137efec267bf5a0a7fc2bd55d0b9e63ed465e2f3a791\" pid:3268 exited_at:{seconds:1747266811 nanos:619033226}"
May 14 23:53:31.625202 kubelet[2591]: I0514 23:53:31.625155 2591 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 14 23:53:31.659116 systemd[1]: Created slice kubepods-burstable-podf9a7626b_5595_4687_aba5_f45326326fef.slice - libcontainer container kubepods-burstable-podf9a7626b_5595_4687_aba5_f45326326fef.slice.
May 14 23:53:31.670361 systemd[1]: Created slice kubepods-burstable-podca908994_9b06_4693_9c8f_19c2adb5511b.slice - libcontainer container kubepods-burstable-podca908994_9b06_4693_9c8f_19c2adb5511b.slice.
May 14 23:53:31.805112 kubelet[2591]: I0514 23:53:31.804973 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca908994-9b06-4693-9c8f-19c2adb5511b-config-volume\") pod \"coredns-6f6b679f8f-lvrnt\" (UID: \"ca908994-9b06-4693-9c8f-19c2adb5511b\") " pod="kube-system/coredns-6f6b679f8f-lvrnt"
May 14 23:53:31.805112 kubelet[2591]: I0514 23:53:31.805015 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gnk\" (UniqueName: \"kubernetes.io/projected/ca908994-9b06-4693-9c8f-19c2adb5511b-kube-api-access-h4gnk\") pod \"coredns-6f6b679f8f-lvrnt\" (UID: \"ca908994-9b06-4693-9c8f-19c2adb5511b\") " pod="kube-system/coredns-6f6b679f8f-lvrnt"
May 14 23:53:31.805112 kubelet[2591]: I0514 23:53:31.805039 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9a7626b-5595-4687-aba5-f45326326fef-config-volume\") pod \"coredns-6f6b679f8f-tjv67\" (UID: \"f9a7626b-5595-4687-aba5-f45326326fef\") " pod="kube-system/coredns-6f6b679f8f-tjv67"
May 14 23:53:31.805112 kubelet[2591]: I0514 23:53:31.805056 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cx4\" (UniqueName: \"kubernetes.io/projected/f9a7626b-5595-4687-aba5-f45326326fef-kube-api-access-z2cx4\") pod \"coredns-6f6b679f8f-tjv67\" (UID: \"f9a7626b-5595-4687-aba5-f45326326fef\") " pod="kube-system/coredns-6f6b679f8f-tjv67"
May 14 23:53:31.966676 kubelet[2591]: E0514 23:53:31.966506 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:31.968060 containerd[1449]: time="2025-05-14T23:53:31.968004646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjv67,Uid:f9a7626b-5595-4687-aba5-f45326326fef,Namespace:kube-system,Attempt:0,}"
May 14 23:53:31.973473 kubelet[2591]: E0514 23:53:31.973157 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:31.974528 containerd[1449]: time="2025-05-14T23:53:31.974464963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvrnt,Uid:ca908994-9b06-4693-9c8f-19c2adb5511b,Namespace:kube-system,Attempt:0,}"
May 14 23:53:32.392916 kubelet[2591]: E0514 23:53:32.392880 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:32.412614 kubelet[2591]: I0514 23:53:32.412550 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-99mbp" podStartSLOduration=5.89468913 podStartE2EDuration="14.412533341s" podCreationTimestamp="2025-05-14 23:53:18 +0000 UTC" firstStartedPulling="2025-05-14 23:53:19.221231856 +0000 UTC m=+6.000011830" lastFinishedPulling="2025-05-14 23:53:27.739076067 +0000 UTC m=+14.517856041" observedRunningTime="2025-05-14 23:53:32.411728795 +0000 UTC m=+19.190508769" watchObservedRunningTime="2025-05-14 23:53:32.412533341 +0000 UTC m=+19.191313315"
May 14 23:53:33.393744 kubelet[2591]: E0514 23:53:33.393666 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:33.743073 systemd-networkd[1393]: cilium_host: Link UP
May 14 23:53:33.743192 systemd-networkd[1393]: cilium_net: Link UP
May 14 23:53:33.744891 systemd-networkd[1393]: cilium_net: Gained carrier
May 14 23:53:33.745435 systemd-networkd[1393]: cilium_host: Gained carrier
May 14 23:53:33.745965 systemd-networkd[1393]: cilium_net: Gained IPv6LL
May 14 23:53:33.746126 systemd-networkd[1393]: cilium_host: Gained IPv6LL
May 14 23:53:33.827616 systemd-networkd[1393]: cilium_vxlan: Link UP
May 14 23:53:33.827803 systemd-networkd[1393]: cilium_vxlan: Gained carrier
May 14 23:53:34.147900 kernel: NET: Registered PF_ALG protocol family
May 14 23:53:34.397178 kubelet[2591]: E0514 23:53:34.396786 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:34.747502 systemd-networkd[1393]: lxc_health: Link UP
May 14 23:53:34.762030 systemd-networkd[1393]: lxc_health: Gained carrier
May 14 23:53:35.125670 systemd-networkd[1393]: lxca6e155091c76: Link UP
May 14 23:53:35.137950 kernel: eth0: renamed from tmpbd4b6
May 14 23:53:35.155173 kernel: eth0: renamed from tmp09932
May 14 23:53:35.160571 systemd-networkd[1393]: lxc248e7bdb82e5: Link UP
May 14 23:53:35.163293 systemd-networkd[1393]: lxca6e155091c76: Gained carrier
May 14 23:53:35.163499 systemd-networkd[1393]: lxc248e7bdb82e5: Gained carrier
May 14 23:53:35.289035 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL
May 14 23:53:35.398402 kubelet[2591]: E0514 23:53:35.398289 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:36.124040 systemd-networkd[1393]: lxc_health: Gained IPv6LL
May 14 23:53:36.761028 systemd-networkd[1393]: lxca6e155091c76: Gained IPv6LL
May 14 23:53:36.761293 systemd-networkd[1393]: lxc248e7bdb82e5: Gained IPv6LL
May 14 23:53:38.880016 containerd[1449]: time="2025-05-14T23:53:38.879966203Z" level=info msg="connecting to shim bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5" address="unix:///run/containerd/s/981e0d9158178a00a72e5024c2d905f4a64e792ae5273a416420f782ddb100b8" namespace=k8s.io protocol=ttrpc version=3
May 14 23:53:38.880494 containerd[1449]: time="2025-05-14T23:53:38.880370598Z" level=info msg="connecting to shim 099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714" address="unix:///run/containerd/s/48684c0c342332067d3a3185c3b1f07bf4f1ceb50c2d77379cee58180a7aca48" namespace=k8s.io protocol=ttrpc version=3
May 14 23:53:38.910017 systemd[1]: Started cri-containerd-099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714.scope - libcontainer container 099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714.
May 14 23:53:38.911206 systemd[1]: Started cri-containerd-bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5.scope - libcontainer container bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5.
May 14 23:53:38.928810 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:53:38.949981 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:53:38.950682 containerd[1449]: time="2025-05-14T23:53:38.950625910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lvrnt,Uid:ca908994-9b06-4693-9c8f-19c2adb5511b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5\""
May 14 23:53:38.951452 kubelet[2591]: E0514 23:53:38.951432 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:38.954598 containerd[1449]: time="2025-05-14T23:53:38.954568622Z" level=info msg="CreateContainer within sandbox \"bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:53:38.968465 containerd[1449]: time="2025-05-14T23:53:38.968431095Z" level=info msg="Container 4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:38.974027 containerd[1449]: time="2025-05-14T23:53:38.973986627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tjv67,Uid:f9a7626b-5595-4687-aba5-f45326326fef,Namespace:kube-system,Attempt:0,} returns sandbox id \"099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714\""
May 14 23:53:38.974897 kubelet[2591]: E0514 23:53:38.974521 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:53:38.975417 containerd[1449]: time="2025-05-14T23:53:38.975377611Z" level=info msg="CreateContainer within sandbox \"bd4b63388c03f65b0c9e2a425bd31cf33a6a2787c69cc0e64f1d839fd52bd2c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86\""
May 14 23:53:38.975872 containerd[1449]: time="2025-05-14T23:53:38.975844485Z" level=info msg="StartContainer for \"4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86\""
May 14 23:53:38.979599 containerd[1449]: time="2025-05-14T23:53:38.978183697Z" level=info msg="CreateContainer within sandbox \"099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:53:38.979599 containerd[1449]: time="2025-05-14T23:53:38.978543212Z" level=info msg="connecting to shim 4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86" address="unix:///run/containerd/s/981e0d9158178a00a72e5024c2d905f4a64e792ae5273a416420f782ddb100b8" protocol=ttrpc version=3
May 14 23:53:38.988769 containerd[1449]: time="2025-05-14T23:53:38.988733129Z" level=info msg="Container 6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409: CDI devices from CRI Config.CDIDevices: []"
May 14 23:53:38.999404 containerd[1449]: time="2025-05-14T23:53:38.999361521Z" level=info msg="CreateContainer within sandbox \"099322ce3b01425ab67df1294cabea75d84aa9a7fc76a943c84fdc7f80bcf714\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409\""
May 14 23:53:39.000744 containerd[1449]: time="2025-05-14T23:53:38.999744556Z" level=info msg="StartContainer for \"6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409\""
May 14 23:53:39.000744 containerd[1449]: time="2025-05-14T23:53:39.000475748Z" level=info msg="connecting to shim 6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409" address="unix:///run/containerd/s/48684c0c342332067d3a3185c3b1f07bf4f1ceb50c2d77379cee58180a7aca48" protocol=ttrpc version=3
May 14 23:53:39.002043 systemd[1]: Started cri-containerd-4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86.scope - libcontainer container 4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86.
May 14 23:53:39.022037 systemd[1]: Started cri-containerd-6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409.scope - libcontainer container 6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409.
May 14 23:53:39.056852 containerd[1449]: time="2025-05-14T23:53:39.054499776Z" level=info msg="StartContainer for \"4ffac67936e0584457fd3294de0a7a3ccdc5b69bfd9e2db7a4b24b7e83107f86\" returns successfully" May 14 23:53:39.060203 containerd[1449]: time="2025-05-14T23:53:39.060172112Z" level=info msg="StartContainer for \"6c64dcadfd1d7ede14b1a19336c9a2dfc835821f07ff725f41cc415fea866409\" returns successfully" May 14 23:53:39.406416 kubelet[2591]: E0514 23:53:39.406367 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:39.424916 kubelet[2591]: E0514 23:53:39.422515 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:39.447899 kubelet[2591]: I0514 23:53:39.447589 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tjv67" podStartSLOduration=20.447572326 podStartE2EDuration="20.447572326s" podCreationTimestamp="2025-05-14 23:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:39.422262293 +0000 UTC m=+26.201042267" watchObservedRunningTime="2025-05-14 23:53:39.447572326 +0000 UTC m=+26.226352260" May 14 23:53:39.856170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450949338.mount: Deactivated successfully. May 14 23:53:40.171349 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:52268.service - OpenSSH per-connection server daemon (10.0.0.1:52268). 
May 14 23:53:40.238300 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 52268 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:40.241015 sshd-session[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:40.244917 systemd-logind[1432]: New session 10 of user core. May 14 23:53:40.255026 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:53:40.396030 sshd[3929]: Connection closed by 10.0.0.1 port 52268 May 14 23:53:40.396370 sshd-session[3927]: pam_unix(sshd:session): session closed for user core May 14 23:53:40.399386 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:53:40.400274 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:52268.service: Deactivated successfully. May 14 23:53:40.406061 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit. May 14 23:53:40.407239 systemd-logind[1432]: Removed session 10. May 14 23:53:40.412161 kubelet[2591]: E0514 23:53:40.412141 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:40.412483 kubelet[2591]: E0514 23:53:40.412194 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:41.417892 kubelet[2591]: E0514 23:53:41.417775 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:41.417892 kubelet[2591]: E0514 23:53:41.417833 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:42.938803 kubelet[2591]: I0514 23:53:42.938443 2591 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" May 14 23:53:42.938803 kubelet[2591]: E0514 23:53:42.938810 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:42.955533 kubelet[2591]: I0514 23:53:42.955461 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lvrnt" podStartSLOduration=23.955442675 podStartE2EDuration="23.955442675s" podCreationTimestamp="2025-05-14 23:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:53:39.461564128 +0000 UTC m=+26.240344142" watchObservedRunningTime="2025-05-14 23:53:42.955442675 +0000 UTC m=+29.734222729" May 14 23:53:43.421535 kubelet[2591]: E0514 23:53:43.421507 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:53:45.408693 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102). May 14 23:53:45.471775 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:45.473054 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:45.477343 systemd-logind[1432]: New session 11 of user core. May 14 23:53:45.490040 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 23:53:45.610892 sshd[3946]: Connection closed by 10.0.0.1 port 58102 May 14 23:53:45.611180 sshd-session[3944]: pam_unix(sshd:session): session closed for user core May 14 23:53:45.614659 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:58102.service: Deactivated successfully. 
May 14 23:53:45.616459 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:53:45.617176 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. May 14 23:53:45.618017 systemd-logind[1432]: Removed session 11. May 14 23:53:50.625106 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:58118.service - OpenSSH per-connection server daemon (10.0.0.1:58118). May 14 23:53:50.668763 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 58118 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:50.670227 sshd-session[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:50.674623 systemd-logind[1432]: New session 12 of user core. May 14 23:53:50.684241 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:53:50.790919 sshd[3967]: Connection closed by 10.0.0.1 port 58118 May 14 23:53:50.791251 sshd-session[3965]: pam_unix(sshd:session): session closed for user core May 14 23:53:50.794454 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:58118.service: Deactivated successfully. May 14 23:53:50.796250 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:53:50.796919 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. May 14 23:53:50.797724 systemd-logind[1432]: Removed session 12. May 14 23:53:55.809126 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:48272.service - OpenSSH per-connection server daemon (10.0.0.1:48272). May 14 23:53:55.856487 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 48272 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:55.857641 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:55.861208 systemd-logind[1432]: New session 13 of user core. May 14 23:53:55.875022 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 14 23:53:55.991971 sshd[3984]: Connection closed by 10.0.0.1 port 48272 May 14 23:53:55.992529 sshd-session[3982]: pam_unix(sshd:session): session closed for user core May 14 23:53:56.010899 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:48272.service: Deactivated successfully. May 14 23:53:56.012664 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:53:56.013483 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. May 14 23:53:56.016108 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:48286.service - OpenSSH per-connection server daemon (10.0.0.1:48286). May 14 23:53:56.017397 systemd-logind[1432]: Removed session 13. May 14 23:53:56.071724 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 48286 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:56.073106 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:56.076933 systemd-logind[1432]: New session 14 of user core. May 14 23:53:56.082026 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:53:56.231198 sshd[4000]: Connection closed by 10.0.0.1 port 48286 May 14 23:53:56.232093 sshd-session[3997]: pam_unix(sshd:session): session closed for user core May 14 23:53:56.244070 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:48286.service: Deactivated successfully. May 14 23:53:56.245641 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:53:56.247343 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. May 14 23:53:56.250444 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:48288.service - OpenSSH per-connection server daemon (10.0.0.1:48288). May 14 23:53:56.253759 systemd-logind[1432]: Removed session 14. 
May 14 23:53:56.306821 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 48288 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:53:56.308241 sshd-session[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:56.312098 systemd-logind[1432]: New session 15 of user core. May 14 23:53:56.328091 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:53:56.443154 sshd[4014]: Connection closed by 10.0.0.1 port 48288 May 14 23:53:56.443914 sshd-session[4011]: pam_unix(sshd:session): session closed for user core May 14 23:53:56.448076 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:48288.service: Deactivated successfully. May 14 23:53:56.450325 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:53:56.451408 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. May 14 23:53:56.452238 systemd-logind[1432]: Removed session 15. May 14 23:54:01.459150 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298). May 14 23:54:01.513815 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:01.515731 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:01.520442 systemd-logind[1432]: New session 16 of user core. May 14 23:54:01.529522 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:54:01.648390 sshd[4029]: Connection closed by 10.0.0.1 port 48298 May 14 23:54:01.648927 sshd-session[4027]: pam_unix(sshd:session): session closed for user core May 14 23:54:01.652757 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:48298.service: Deactivated successfully. May 14 23:54:01.654774 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:54:01.655745 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. 
May 14 23:54:01.656587 systemd-logind[1432]: Removed session 16. May 14 23:54:06.659473 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376). May 14 23:54:06.709089 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:06.710433 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:06.714969 systemd-logind[1432]: New session 17 of user core. May 14 23:54:06.729073 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:54:06.838359 sshd[4044]: Connection closed by 10.0.0.1 port 34376 May 14 23:54:06.838708 sshd-session[4042]: pam_unix(sshd:session): session closed for user core May 14 23:54:06.851648 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:34376.service: Deactivated successfully. May 14 23:54:06.853376 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:54:06.854634 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. May 14 23:54:06.855789 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384). May 14 23:54:06.857301 systemd-logind[1432]: Removed session 17. May 14 23:54:06.907019 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:06.908287 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:06.917020 systemd-logind[1432]: New session 18 of user core. May 14 23:54:06.922027 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 14 23:54:07.125059 sshd[4060]: Connection closed by 10.0.0.1 port 34384 May 14 23:54:07.126267 sshd-session[4057]: pam_unix(sshd:session): session closed for user core May 14 23:54:07.136466 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:34384.service: Deactivated successfully. May 14 23:54:07.138381 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:54:07.141241 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. May 14 23:54:07.142778 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:34392.service - OpenSSH per-connection server daemon (10.0.0.1:34392). May 14 23:54:07.144135 systemd-logind[1432]: Removed session 18. May 14 23:54:07.200855 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 34392 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:07.202167 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:07.206655 systemd-logind[1432]: New session 19 of user core. May 14 23:54:07.216018 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 23:54:08.466739 sshd[4073]: Connection closed by 10.0.0.1 port 34392 May 14 23:54:08.466147 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 14 23:54:08.480920 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:34392.service: Deactivated successfully. May 14 23:54:08.483170 systemd[1]: session-19.scope: Deactivated successfully. May 14 23:54:08.483550 systemd[1]: session-19.scope: Consumed 465ms CPU time, 64.4M memory peak. May 14 23:54:08.485293 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit. May 14 23:54:08.488440 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:34400.service - OpenSSH per-connection server daemon (10.0.0.1:34400). May 14 23:54:08.489564 systemd-logind[1432]: Removed session 19. 
May 14 23:54:08.540997 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:08.542352 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:08.547885 systemd-logind[1432]: New session 20 of user core. May 14 23:54:08.562042 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 23:54:08.782336 sshd[4097]: Connection closed by 10.0.0.1 port 34400 May 14 23:54:08.782778 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 14 23:54:08.800392 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:34400.service: Deactivated successfully. May 14 23:54:08.802085 systemd[1]: session-20.scope: Deactivated successfully. May 14 23:54:08.803410 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit. May 14 23:54:08.805347 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:34412.service - OpenSSH per-connection server daemon (10.0.0.1:34412). May 14 23:54:08.807095 systemd-logind[1432]: Removed session 20. May 14 23:54:08.856369 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:08.857584 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:08.862251 systemd-logind[1432]: New session 21 of user core. May 14 23:54:08.869056 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 23:54:08.985158 sshd[4111]: Connection closed by 10.0.0.1 port 34412 May 14 23:54:08.985494 sshd-session[4108]: pam_unix(sshd:session): session closed for user core May 14 23:54:08.989038 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:34412.service: Deactivated successfully. May 14 23:54:08.990813 systemd[1]: session-21.scope: Deactivated successfully. May 14 23:54:08.991805 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit. 
May 14 23:54:08.992663 systemd-logind[1432]: Removed session 21. May 14 23:54:13.997508 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon (10.0.0.1:53672). May 14 23:54:14.050076 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:14.051359 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:14.057594 systemd-logind[1432]: New session 22 of user core. May 14 23:54:14.065243 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 23:54:14.175095 sshd[4133]: Connection closed by 10.0.0.1 port 53672 May 14 23:54:14.175447 sshd-session[4131]: pam_unix(sshd:session): session closed for user core May 14 23:54:14.179605 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:53672.service: Deactivated successfully. May 14 23:54:14.181254 systemd[1]: session-22.scope: Deactivated successfully. May 14 23:54:14.186538 systemd-logind[1432]: Session 22 logged out. Waiting for processes to exit. May 14 23:54:14.187628 systemd-logind[1432]: Removed session 22. May 14 23:54:19.202331 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:53674.service - OpenSSH per-connection server daemon (10.0.0.1:53674). May 14 23:54:19.242403 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 53674 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:19.243678 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:19.248253 systemd-logind[1432]: New session 23 of user core. May 14 23:54:19.262090 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 14 23:54:19.394325 sshd[4148]: Connection closed by 10.0.0.1 port 53674 May 14 23:54:19.395084 sshd-session[4146]: pam_unix(sshd:session): session closed for user core May 14 23:54:19.402561 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:53674.service: Deactivated successfully. May 14 23:54:19.406323 systemd[1]: session-23.scope: Deactivated successfully. May 14 23:54:19.407309 systemd-logind[1432]: Session 23 logged out. Waiting for processes to exit. May 14 23:54:19.409789 systemd-logind[1432]: Removed session 23. May 14 23:54:24.406674 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:33750.service - OpenSSH per-connection server daemon (10.0.0.1:33750). May 14 23:54:24.459959 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 33750 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:24.461308 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:24.465857 systemd-logind[1432]: New session 24 of user core. May 14 23:54:24.477059 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 23:54:24.584449 sshd[4165]: Connection closed by 10.0.0.1 port 33750 May 14 23:54:24.585794 sshd-session[4163]: pam_unix(sshd:session): session closed for user core May 14 23:54:24.599603 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:33750.service: Deactivated successfully. May 14 23:54:24.601273 systemd[1]: session-24.scope: Deactivated successfully. May 14 23:54:24.603993 systemd-logind[1432]: Session 24 logged out. Waiting for processes to exit. May 14 23:54:24.604529 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:33760.service - OpenSSH per-connection server daemon (10.0.0.1:33760). May 14 23:54:24.607769 systemd-logind[1432]: Removed session 24. 
May 14 23:54:24.666438 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 33760 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:24.667735 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:24.675947 systemd-logind[1432]: New session 25 of user core. May 14 23:54:24.686094 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 23:54:26.700253 containerd[1449]: time="2025-05-14T23:54:26.700196781Z" level=info msg="StopContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" with timeout 30 (s)" May 14 23:54:26.700639 containerd[1449]: time="2025-05-14T23:54:26.700593183Z" level=info msg="Stop container \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" with signal terminated" May 14 23:54:26.710108 systemd[1]: cri-containerd-50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc.scope: Deactivated successfully. May 14 23:54:26.718548 containerd[1449]: time="2025-05-14T23:54:26.711970759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" id:\"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" pid:3128 exited_at:{seconds:1747266866 nanos:711624037}" May 14 23:54:26.720028 containerd[1449]: time="2025-05-14T23:54:26.719984278Z" level=info msg="received exit event container_id:\"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" id:\"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" pid:3128 exited_at:{seconds:1747266866 nanos:711624037}" May 14 23:54:26.726675 containerd[1449]: time="2025-05-14T23:54:26.726636710Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 
23:54:26.730958 containerd[1449]: time="2025-05-14T23:54:26.730921531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" id:\"0da7d5389e0237d4a43f3bd917cf5a6b7de435ffe9a3509d502108de1285ee11\" pid:4207 exited_at:{seconds:1747266866 nanos:730698050}" May 14 23:54:26.732857 containerd[1449]: time="2025-05-14T23:54:26.732824781Z" level=info msg="StopContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" with timeout 2 (s)" May 14 23:54:26.733423 containerd[1449]: time="2025-05-14T23:54:26.733297383Z" level=info msg="Stop container \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" with signal terminated" May 14 23:54:26.737795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc-rootfs.mount: Deactivated successfully. May 14 23:54:26.741300 systemd-networkd[1393]: lxc_health: Link DOWN May 14 23:54:26.741307 systemd-networkd[1393]: lxc_health: Lost carrier May 14 23:54:26.751952 containerd[1449]: time="2025-05-14T23:54:26.751889874Z" level=info msg="StopContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" returns successfully" May 14 23:54:26.754553 containerd[1449]: time="2025-05-14T23:54:26.754524607Z" level=info msg="StopPodSandbox for \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\"" May 14 23:54:26.754627 containerd[1449]: time="2025-05-14T23:54:26.754603567Z" level=info msg="Container to stop \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.758619 systemd[1]: cri-containerd-96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913.scope: Deactivated successfully. 
May 14 23:54:26.758961 systemd[1]: cri-containerd-96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913.scope: Consumed 6.751s CPU time, 123M memory peak, 136K read from disk, 12.9M written to disk. May 14 23:54:26.759450 containerd[1449]: time="2025-05-14T23:54:26.759355190Z" level=info msg="received exit event container_id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" pid:3237 exited_at:{seconds:1747266866 nanos:759042789}" May 14 23:54:26.760824 containerd[1449]: time="2025-05-14T23:54:26.759444591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" id:\"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" pid:3237 exited_at:{seconds:1747266866 nanos:759042789}" May 14 23:54:26.763262 systemd[1]: cri-containerd-f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555.scope: Deactivated successfully. May 14 23:54:26.771716 containerd[1449]: time="2025-05-14T23:54:26.771681771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" id:\"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" pid:2844 exit_status:137 exited_at:{seconds:1747266866 nanos:771039767}" May 14 23:54:26.785897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913-rootfs.mount: Deactivated successfully. 
May 14 23:54:26.795351 containerd[1449]: time="2025-05-14T23:54:26.795294686Z" level=info msg="StopContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" returns successfully" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795731768Z" level=info msg="StopPodSandbox for \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\"" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795790408Z" level=info msg="Container to stop \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795801929Z" level=info msg="Container to stop \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795810409Z" level=info msg="Container to stop \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795818289Z" level=info msg="Container to stop \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.796058 containerd[1449]: time="2025-05-14T23:54:26.795825689Z" level=info msg="Container to stop \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:54:26.803716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555-rootfs.mount: Deactivated successfully. 
May 14 23:54:26.804445 systemd[1]: cri-containerd-4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e.scope: Deactivated successfully. May 14 23:54:26.806680 containerd[1449]: time="2025-05-14T23:54:26.806488781Z" level=info msg="shim disconnected" id=f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555 namespace=k8s.io May 14 23:54:26.806680 containerd[1449]: time="2025-05-14T23:54:26.806520221Z" level=warning msg="cleaning up after shim disconnected" id=f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555 namespace=k8s.io May 14 23:54:26.806680 containerd[1449]: time="2025-05-14T23:54:26.806549421Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:54:26.820532 containerd[1449]: time="2025-05-14T23:54:26.820492409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" id:\"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" pid:2733 exit_status:137 exited_at:{seconds:1747266866 nanos:807098904}" May 14 23:54:26.822079 containerd[1449]: time="2025-05-14T23:54:26.822035457Z" level=info msg="TearDown network for sandbox \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" successfully" May 14 23:54:26.822079 containerd[1449]: time="2025-05-14T23:54:26.822067297Z" level=info msg="StopPodSandbox for \"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" returns successfully" May 14 23:54:26.822522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555-shm.mount: Deactivated successfully. 
May 14 23:54:26.823562 containerd[1449]: time="2025-05-14T23:54:26.823531304Z" level=info msg="received exit event sandbox_id:\"f4cca07bc45e852f9d6a10522c511a0df556e64b0e23c9b43a224de4f190a555\" exit_status:137 exited_at:{seconds:1747266866 nanos:771039767}" May 14 23:54:26.825300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e-rootfs.mount: Deactivated successfully. May 14 23:54:26.831400 containerd[1449]: time="2025-05-14T23:54:26.831369702Z" level=info msg="received exit event sandbox_id:\"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" exit_status:137 exited_at:{seconds:1747266866 nanos:807098904}" May 14 23:54:26.831777 containerd[1449]: time="2025-05-14T23:54:26.831543743Z" level=info msg="shim disconnected" id=4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e namespace=k8s.io May 14 23:54:26.831777 containerd[1449]: time="2025-05-14T23:54:26.831562183Z" level=warning msg="cleaning up after shim disconnected" id=4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e namespace=k8s.io May 14 23:54:26.831777 containerd[1449]: time="2025-05-14T23:54:26.831743504Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:54:26.832089 containerd[1449]: time="2025-05-14T23:54:26.832008426Z" level=info msg="TearDown network for sandbox \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" successfully" May 14 23:54:26.832089 containerd[1449]: time="2025-05-14T23:54:26.832035306Z" level=info msg="StopPodSandbox for \"4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e\" returns successfully" May 14 23:54:26.958773 kubelet[2591]: I0514 23:54:26.958593 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-net\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: 
\"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.958773 kubelet[2591]: I0514 23:54:26.958691 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-hostproc\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.958773 kubelet[2591]: I0514 23:54:26.958711 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cni-path\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.958773 kubelet[2591]: I0514 23:54:26.958730 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-lib-modules\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.958773 kubelet[2591]: I0514 23:54:26.958750 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c2799d-929b-42f6-8c54-16764acabe65-cilium-config-path\") pod \"b5c2799d-929b-42f6-8c54-16764acabe65\" (UID: \"b5c2799d-929b-42f6-8c54-16764acabe65\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959412 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2151e29-4440-47e8-b48b-314528425e07-cilium-config-path\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959435 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-run\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959453 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-xtables-lock\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959476 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-cgroup\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959494 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-etc-cni-netd\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960485 kubelet[2591]: I0514 23:54:26.959512 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gns44\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-kube-api-access-gns44\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960620 kubelet[2591]: I0514 23:54:26.959527 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-hubble-tls\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960620 kubelet[2591]: I0514 23:54:26.959541 2591 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-kernel\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960620 kubelet[2591]: I0514 23:54:26.959558 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnkf\" (UniqueName: \"kubernetes.io/projected/b5c2799d-929b-42f6-8c54-16764acabe65-kube-api-access-jfnkf\") pod \"b5c2799d-929b-42f6-8c54-16764acabe65\" (UID: \"b5c2799d-929b-42f6-8c54-16764acabe65\") " May 14 23:54:26.960620 kubelet[2591]: I0514 23:54:26.959575 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2151e29-4440-47e8-b48b-314528425e07-clustermesh-secrets\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.960620 kubelet[2591]: I0514 23:54:26.959589 2591 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-bpf-maps\") pod \"e2151e29-4440-47e8-b48b-314528425e07\" (UID: \"e2151e29-4440-47e8-b48b-314528425e07\") " May 14 23:54:26.961793 kubelet[2591]: I0514 23:54:26.961763 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.961916 kubelet[2591]: I0514 23:54:26.961763 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.961916 kubelet[2591]: I0514 23:54:26.961841 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.962084 kubelet[2591]: I0514 23:54:26.961872 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.962084 kubelet[2591]: I0514 23:54:26.961952 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.962084 kubelet[2591]: I0514 23:54:26.961763 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.964410 kubelet[2591]: I0514 23:54:26.963543 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.964410 kubelet[2591]: I0514 23:54:26.963752 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c2799d-929b-42f6-8c54-16764acabe65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5c2799d-929b-42f6-8c54-16764acabe65" (UID: "b5c2799d-929b-42f6-8c54-16764acabe65"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 23:54:26.964751 kubelet[2591]: I0514 23:54:26.964720 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:54:26.964812 kubelet[2591]: I0514 23:54:26.964771 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.964812 kubelet[2591]: I0514 23:54:26.964788 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.964904 kubelet[2591]: I0514 23:54:26.964859 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-kube-api-access-gns44" (OuterVolumeSpecName: "kube-api-access-gns44") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "kube-api-access-gns44". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:54:26.964945 kubelet[2591]: I0514 23:54:26.964914 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:54:26.966000 kubelet[2591]: I0514 23:54:26.965956 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2151e29-4440-47e8-b48b-314528425e07-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 23:54:26.966847 kubelet[2591]: I0514 23:54:26.966815 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2151e29-4440-47e8-b48b-314528425e07-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2151e29-4440-47e8-b48b-314528425e07" (UID: "e2151e29-4440-47e8-b48b-314528425e07"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 23:54:26.966996 kubelet[2591]: I0514 23:54:26.966967 2591 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c2799d-929b-42f6-8c54-16764acabe65-kube-api-access-jfnkf" (OuterVolumeSpecName: "kube-api-access-jfnkf") pod "b5c2799d-929b-42f6-8c54-16764acabe65" (UID: "b5c2799d-929b-42f6-8c54-16764acabe65"). InnerVolumeSpecName "kube-api-access-jfnkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:54:27.060251 kubelet[2591]: I0514 23:54:27.060215 2591 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2151e29-4440-47e8-b48b-314528425e07-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060436 2591 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060455 2591 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060464 2591 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060472 2591 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060480 2591 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060491 2591 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c2799d-929b-42f6-8c54-16764acabe65-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060498 2591 
reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060578 kubelet[2591]: I0514 23:54:27.060505 2591 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2151e29-4440-47e8-b48b-314528425e07-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060514 2591 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060522 2591 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060529 2591 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gns44\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-kube-api-access-gns44\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060537 2591 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2151e29-4440-47e8-b48b-314528425e07-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060545 2591 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060554 2591 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e2151e29-4440-47e8-b48b-314528425e07-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.060839 kubelet[2591]: I0514 23:54:27.060561 2591 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jfnkf\" (UniqueName: \"kubernetes.io/projected/b5c2799d-929b-42f6-8c54-16764acabe65-kube-api-access-jfnkf\") on node \"localhost\" DevicePath \"\"" May 14 23:54:27.320196 systemd[1]: Removed slice kubepods-burstable-pode2151e29_4440_47e8_b48b_314528425e07.slice - libcontainer container kubepods-burstable-pode2151e29_4440_47e8_b48b_314528425e07.slice. May 14 23:54:27.320296 systemd[1]: kubepods-burstable-pode2151e29_4440_47e8_b48b_314528425e07.slice: Consumed 6.917s CPU time, 123.3M memory peak, 148K read from disk, 12.9M written to disk. May 14 23:54:27.321736 systemd[1]: Removed slice kubepods-besteffort-podb5c2799d_929b_42f6_8c54_16764acabe65.slice - libcontainer container kubepods-besteffort-podb5c2799d_929b_42f6_8c54_16764acabe65.slice. 
May 14 23:54:27.503895 kubelet[2591]: I0514 23:54:27.503845 2591 scope.go:117] "RemoveContainer" containerID="96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913" May 14 23:54:27.507724 containerd[1449]: time="2025-05-14T23:54:27.507687024Z" level=info msg="RemoveContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\"" May 14 23:54:27.522051 containerd[1449]: time="2025-05-14T23:54:27.522004252Z" level=info msg="RemoveContainer for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" returns successfully" May 14 23:54:27.522290 kubelet[2591]: I0514 23:54:27.522264 2591 scope.go:117] "RemoveContainer" containerID="e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8" May 14 23:54:27.525878 containerd[1449]: time="2025-05-14T23:54:27.525675589Z" level=info msg="RemoveContainer for \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\"" May 14 23:54:27.530736 containerd[1449]: time="2025-05-14T23:54:27.530698533Z" level=info msg="RemoveContainer for \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" returns successfully" May 14 23:54:27.531008 kubelet[2591]: I0514 23:54:27.530976 2591 scope.go:117] "RemoveContainer" containerID="f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94" May 14 23:54:27.533036 containerd[1449]: time="2025-05-14T23:54:27.532998424Z" level=info msg="RemoveContainer for \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\"" May 14 23:54:27.536336 containerd[1449]: time="2025-05-14T23:54:27.536301800Z" level=info msg="RemoveContainer for \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" returns successfully" May 14 23:54:27.536975 kubelet[2591]: I0514 23:54:27.536480 2591 scope.go:117] "RemoveContainer" containerID="349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe" May 14 23:54:27.537895 containerd[1449]: time="2025-05-14T23:54:27.537857247Z" level=info msg="RemoveContainer for 
\"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\"" May 14 23:54:27.540415 containerd[1449]: time="2025-05-14T23:54:27.540391739Z" level=info msg="RemoveContainer for \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" returns successfully" May 14 23:54:27.540565 kubelet[2591]: I0514 23:54:27.540541 2591 scope.go:117] "RemoveContainer" containerID="9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2" May 14 23:54:27.542346 containerd[1449]: time="2025-05-14T23:54:27.541858226Z" level=info msg="RemoveContainer for \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\"" May 14 23:54:27.544384 containerd[1449]: time="2025-05-14T23:54:27.544341118Z" level=info msg="RemoveContainer for \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" returns successfully" May 14 23:54:27.544623 kubelet[2591]: I0514 23:54:27.544594 2591 scope.go:117] "RemoveContainer" containerID="96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913" May 14 23:54:27.549106 containerd[1449]: time="2025-05-14T23:54:27.544771240Z" level=error msg="ContainerStatus for \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\": not found" May 14 23:54:27.552904 kubelet[2591]: E0514 23:54:27.552859 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\": not found" containerID="96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913" May 14 23:54:27.553007 kubelet[2591]: I0514 23:54:27.552910 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913"} err="failed to get 
container status \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\": rpc error: code = NotFound desc = an error occurred when try to find container \"96a06ce9bf053aa5953f586055830d10edcf2aef9a23538f506473b10d71c913\": not found" May 14 23:54:27.553007 kubelet[2591]: I0514 23:54:27.553005 2591 scope.go:117] "RemoveContainer" containerID="e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8" May 14 23:54:27.553313 containerd[1449]: time="2025-05-14T23:54:27.553270721Z" level=error msg="ContainerStatus for \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\": not found" May 14 23:54:27.553561 kubelet[2591]: E0514 23:54:27.553540 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\": not found" containerID="e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8" May 14 23:54:27.553735 kubelet[2591]: I0514 23:54:27.553628 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8"} err="failed to get container status \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8173e5520bb855fc45a814efd224fdcfa36da9344d1b32b9a7543bfc47350b8\": not found" May 14 23:54:27.553735 kubelet[2591]: I0514 23:54:27.553650 2591 scope.go:117] "RemoveContainer" containerID="f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94" May 14 23:54:27.553983 containerd[1449]: time="2025-05-14T23:54:27.553950684Z" level=error msg="ContainerStatus for 
\"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\": not found" May 14 23:54:27.554083 kubelet[2591]: E0514 23:54:27.554054 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\": not found" containerID="f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94" May 14 23:54:27.554083 kubelet[2591]: I0514 23:54:27.554081 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94"} err="failed to get container status \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\": rpc error: code = NotFound desc = an error occurred when try to find container \"f02bc894e6166c2cb86f90436af88f3b59c387620c6ca6bb7a3e17c007679e94\": not found" May 14 23:54:27.554159 kubelet[2591]: I0514 23:54:27.554098 2591 scope.go:117] "RemoveContainer" containerID="349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe" May 14 23:54:27.554337 containerd[1449]: time="2025-05-14T23:54:27.554266726Z" level=error msg="ContainerStatus for \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\": not found" May 14 23:54:27.554501 kubelet[2591]: E0514 23:54:27.554477 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\": not found" 
containerID="349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe" May 14 23:54:27.554547 kubelet[2591]: I0514 23:54:27.554507 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe"} err="failed to get container status \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"349a262b1eb0e1370cd1b99ed5e43b1c025e11781980bf2c526db41e5c8592fe\": not found" May 14 23:54:27.554547 kubelet[2591]: I0514 23:54:27.554523 2591 scope.go:117] "RemoveContainer" containerID="9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2" May 14 23:54:27.554731 containerd[1449]: time="2025-05-14T23:54:27.554692968Z" level=error msg="ContainerStatus for \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\": not found" May 14 23:54:27.555156 kubelet[2591]: E0514 23:54:27.554973 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\": not found" containerID="9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2" May 14 23:54:27.555156 kubelet[2591]: I0514 23:54:27.555003 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2"} err="failed to get container status \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b71f356cf58b2ac90f8e9490ba7610178f0f58894f5708951ca3b40b3561ea2\": not found" May 14 
23:54:27.555156 kubelet[2591]: I0514 23:54:27.555022 2591 scope.go:117] "RemoveContainer" containerID="50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc" May 14 23:54:27.557663 containerd[1449]: time="2025-05-14T23:54:27.557160779Z" level=info msg="RemoveContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\"" May 14 23:54:27.559771 containerd[1449]: time="2025-05-14T23:54:27.559738232Z" level=info msg="RemoveContainer for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" returns successfully" May 14 23:54:27.560056 kubelet[2591]: I0514 23:54:27.560017 2591 scope.go:117] "RemoveContainer" containerID="50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc" May 14 23:54:27.560337 containerd[1449]: time="2025-05-14T23:54:27.560300834Z" level=error msg="ContainerStatus for \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\": not found" May 14 23:54:27.560558 kubelet[2591]: E0514 23:54:27.560531 2591 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\": not found" containerID="50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc" May 14 23:54:27.560613 kubelet[2591]: I0514 23:54:27.560564 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc"} err="failed to get container status \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\": rpc error: code = NotFound desc = an error occurred when try to find container \"50c94184ea651bd4afed0d2da15305a42acae3c34a70b2792d891afa62c1eadc\": not found" May 14 23:54:27.737825 systemd[1]: 
var-lib-kubelet-pods-b5c2799d\x2d929b\x2d42f6\x2d8c54\x2d16764acabe65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfnkf.mount: Deactivated successfully. May 14 23:54:27.737944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b5afadc78d6ff89559c4ebe06af8fa1158f2a435c0e6ceaaf57354944b55d1e-shm.mount: Deactivated successfully. May 14 23:54:27.738000 systemd[1]: var-lib-kubelet-pods-e2151e29\x2d4440\x2d47e8\x2db48b\x2d314528425e07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgns44.mount: Deactivated successfully. May 14 23:54:27.738058 systemd[1]: var-lib-kubelet-pods-e2151e29\x2d4440\x2d47e8\x2db48b\x2d314528425e07-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 23:54:27.738118 systemd[1]: var-lib-kubelet-pods-e2151e29\x2d4440\x2d47e8\x2db48b\x2d314528425e07-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 23:54:28.357482 kubelet[2591]: E0514 23:54:28.357433 2591 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:54:28.567372 sshd[4180]: Connection closed by 10.0.0.1 port 33760 May 14 23:54:28.567690 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 14 23:54:28.580134 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:33760.service: Deactivated successfully. May 14 23:54:28.581658 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:54:28.581877 systemd[1]: session-25.scope: Consumed 1.256s CPU time, 28.2M memory peak. May 14 23:54:28.582340 systemd-logind[1432]: Session 25 logged out. Waiting for processes to exit. May 14 23:54:28.584085 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:33766.service - OpenSSH per-connection server daemon (10.0.0.1:33766). May 14 23:54:28.585310 systemd-logind[1432]: Removed session 25. 
May 14 23:54:28.633271 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 33766 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:28.634838 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:28.639117 systemd-logind[1432]: New session 26 of user core. May 14 23:54:28.649012 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 23:54:29.314833 kubelet[2591]: I0514 23:54:29.313989 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c2799d-929b-42f6-8c54-16764acabe65" path="/var/lib/kubelet/pods/b5c2799d-929b-42f6-8c54-16764acabe65/volumes" May 14 23:54:29.314833 kubelet[2591]: I0514 23:54:29.314416 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2151e29-4440-47e8-b48b-314528425e07" path="/var/lib/kubelet/pods/e2151e29-4440-47e8-b48b-314528425e07/volumes" May 14 23:54:29.346933 sshd[4334]: Connection closed by 10.0.0.1 port 33766 May 14 23:54:29.348229 sshd-session[4331]: pam_unix(sshd:session): session closed for user core May 14 23:54:29.359233 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:33766.service: Deactivated successfully. May 14 23:54:29.362758 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:54:29.364860 systemd-logind[1432]: Session 26 logged out. Waiting for processes to exit. May 14 23:54:29.366947 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). May 14 23:54:29.369422 systemd-logind[1432]: Removed session 26. 
May 14 23:54:29.371614 kubelet[2591]: E0514 23:54:29.371542 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5c2799d-929b-42f6-8c54-16764acabe65" containerName="cilium-operator" May 14 23:54:29.371614 kubelet[2591]: E0514 23:54:29.371614 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="mount-bpf-fs" May 14 23:54:29.372354 kubelet[2591]: E0514 23:54:29.371624 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="apply-sysctl-overwrites" May 14 23:54:29.372354 kubelet[2591]: E0514 23:54:29.371633 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="mount-cgroup" May 14 23:54:29.372354 kubelet[2591]: E0514 23:54:29.371639 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="clean-cilium-state" May 14 23:54:29.372354 kubelet[2591]: E0514 23:54:29.371646 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="cilium-agent" May 14 23:54:29.372354 kubelet[2591]: I0514 23:54:29.371669 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c2799d-929b-42f6-8c54-16764acabe65" containerName="cilium-operator" May 14 23:54:29.372354 kubelet[2591]: I0514 23:54:29.371676 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2151e29-4440-47e8-b48b-314528425e07" containerName="cilium-agent" May 14 23:54:29.404667 systemd[1]: Created slice kubepods-burstable-podfb671b08_f11c_4fc2_97b7_39d46c3bfc5d.slice - libcontainer container kubepods-burstable-podfb671b08_f11c_4fc2_97b7_39d46c3bfc5d.slice. 
May 14 23:54:29.435093 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:29.436541 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:29.440432 systemd-logind[1432]: New session 27 of user core. May 14 23:54:29.452019 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:54:29.501701 sshd[4349]: Connection closed by 10.0.0.1 port 33768 May 14 23:54:29.502122 sshd-session[4345]: pam_unix(sshd:session): session closed for user core May 14 23:54:29.514245 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:33768.service: Deactivated successfully. May 14 23:54:29.515974 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:54:29.517333 systemd-logind[1432]: Session 27 logged out. Waiting for processes to exit. May 14 23:54:29.518651 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). May 14 23:54:29.519441 systemd-logind[1432]: Removed session 27. May 14 23:54:29.565914 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:JGpbUaP68o+c7lzfweYm3nii0Q3Tlu54nJ930R9/dFs May 14 23:54:29.567137 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:29.570871 systemd-logind[1432]: New session 28 of user core. 
May 14 23:54:29.575307 kubelet[2591]: I0514 23:54:29.575278 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-cni-path\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575399 kubelet[2591]: I0514 23:54:29.575314 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-lib-modules\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575399 kubelet[2591]: I0514 23:54:29.575343 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-cilium-ipsec-secrets\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575399 kubelet[2591]: I0514 23:54:29.575382 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-cilium-cgroup\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575399 kubelet[2591]: I0514 23:54:29.575400 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-bpf-maps\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575490 kubelet[2591]: I0514 23:54:29.575414 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-hostproc\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575490 kubelet[2591]: I0514 23:54:29.575429 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-cilium-run\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575490 kubelet[2591]: I0514 23:54:29.575445 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-host-proc-sys-kernel\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575490 kubelet[2591]: I0514 23:54:29.575461 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-cilium-config-path\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575490 kubelet[2591]: I0514 23:54:29.575478 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-etc-cni-netd\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575601 kubelet[2591]: I0514 23:54:29.575492 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-hubble-tls\") pod 
\"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575601 kubelet[2591]: I0514 23:54:29.575509 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-host-proc-sys-net\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575601 kubelet[2591]: I0514 23:54:29.575523 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-264g5\" (UniqueName: \"kubernetes.io/projected/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-kube-api-access-264g5\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575601 kubelet[2591]: I0514 23:54:29.575538 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-xtables-lock\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.575601 kubelet[2591]: I0514 23:54:29.575555 2591 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb671b08-f11c-4fc2-97b7-39d46c3bfc5d-clustermesh-secrets\") pod \"cilium-2vdrl\" (UID: \"fb671b08-f11c-4fc2-97b7-39d46c3bfc5d\") " pod="kube-system/cilium-2vdrl" May 14 23:54:29.581019 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 14 23:54:29.708045 kubelet[2591]: E0514 23:54:29.708004 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:29.708571 containerd[1449]: time="2025-05-14T23:54:29.708525001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vdrl,Uid:fb671b08-f11c-4fc2-97b7-39d46c3bfc5d,Namespace:kube-system,Attempt:0,}" May 14 23:54:29.721550 containerd[1449]: time="2025-05-14T23:54:29.721507020Z" level=info msg="connecting to shim 2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" namespace=k8s.io protocol=ttrpc version=3 May 14 23:54:29.745450 systemd[1]: Started cri-containerd-2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745.scope - libcontainer container 2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745. 
May 14 23:54:29.766671 containerd[1449]: time="2025-05-14T23:54:29.766630143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vdrl,Uid:fb671b08-f11c-4fc2-97b7-39d46c3bfc5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\"" May 14 23:54:29.767313 kubelet[2591]: E0514 23:54:29.767289 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:29.769486 containerd[1449]: time="2025-05-14T23:54:29.769450276Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:54:29.774886 containerd[1449]: time="2025-05-14T23:54:29.774821700Z" level=info msg="Container 6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85: CDI devices from CRI Config.CDIDevices: []" May 14 23:54:29.780764 containerd[1449]: time="2025-05-14T23:54:29.780702287Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\"" May 14 23:54:29.782505 containerd[1449]: time="2025-05-14T23:54:29.781389050Z" level=info msg="StartContainer for \"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\"" May 14 23:54:29.782505 containerd[1449]: time="2025-05-14T23:54:29.782218494Z" level=info msg="connecting to shim 6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" protocol=ttrpc version=3 May 14 23:54:29.805096 systemd[1]: Started cri-containerd-6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85.scope - libcontainer 
container 6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85. May 14 23:54:29.829523 containerd[1449]: time="2025-05-14T23:54:29.829416067Z" level=info msg="StartContainer for \"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\" returns successfully" May 14 23:54:29.844072 systemd[1]: cri-containerd-6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85.scope: Deactivated successfully. May 14 23:54:29.848195 containerd[1449]: time="2025-05-14T23:54:29.848106111Z" level=info msg="received exit event container_id:\"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\" id:\"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\" pid:4429 exited_at:{seconds:1747266869 nanos:847836230}" May 14 23:54:29.848377 containerd[1449]: time="2025-05-14T23:54:29.848243672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\" id:\"6333a6bca4c6efaf73b569e6a1dee461c86e831fe61de99bfa396d4c47998a85\" pid:4429 exited_at:{seconds:1747266869 nanos:847836230}" May 14 23:54:30.514359 kubelet[2591]: E0514 23:54:30.513849 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:30.518231 containerd[1449]: time="2025-05-14T23:54:30.517958074Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:54:30.525392 containerd[1449]: time="2025-05-14T23:54:30.524710664Z" level=info msg="Container 52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15: CDI devices from CRI Config.CDIDevices: []" May 14 23:54:30.533708 containerd[1449]: time="2025-05-14T23:54:30.533670703Z" level=info msg="CreateContainer within sandbox 
\"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\"" May 14 23:54:30.534878 containerd[1449]: time="2025-05-14T23:54:30.534262906Z" level=info msg="StartContainer for \"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\"" May 14 23:54:30.536038 containerd[1449]: time="2025-05-14T23:54:30.536012473Z" level=info msg="connecting to shim 52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" protocol=ttrpc version=3 May 14 23:54:30.562047 systemd[1]: Started cri-containerd-52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15.scope - libcontainer container 52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15. May 14 23:54:30.588922 containerd[1449]: time="2025-05-14T23:54:30.588805986Z" level=info msg="StartContainer for \"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\" returns successfully" May 14 23:54:30.593022 systemd[1]: cri-containerd-52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15.scope: Deactivated successfully. 
May 14 23:54:30.596072 containerd[1449]: time="2025-05-14T23:54:30.596031577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\" id:\"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\" pid:4475 exited_at:{seconds:1747266870 nanos:595289654}" May 14 23:54:30.596072 containerd[1449]: time="2025-05-14T23:54:30.596039657Z" level=info msg="received exit event container_id:\"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\" id:\"52fdfbfe887466db99929c49042531de8ada66cc0eb80ff2c3426778e7fe1f15\" pid:4475 exited_at:{seconds:1747266870 nanos:595289654}" May 14 23:54:31.517172 kubelet[2591]: E0514 23:54:31.517130 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:31.519802 containerd[1449]: time="2025-05-14T23:54:31.519735859Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:54:31.576905 containerd[1449]: time="2025-05-14T23:54:31.576649583Z" level=info msg="Container ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb: CDI devices from CRI Config.CDIDevices: []" May 14 23:54:31.581397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010631755.mount: Deactivated successfully. 
May 14 23:54:31.586108 containerd[1449]: time="2025-05-14T23:54:31.586051343Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\"" May 14 23:54:31.586661 containerd[1449]: time="2025-05-14T23:54:31.586636706Z" level=info msg="StartContainer for \"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\"" May 14 23:54:31.588045 containerd[1449]: time="2025-05-14T23:54:31.588001112Z" level=info msg="connecting to shim ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" protocol=ttrpc version=3 May 14 23:54:31.608069 systemd[1]: Started cri-containerd-ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb.scope - libcontainer container ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb. May 14 23:54:31.638144 systemd[1]: cri-containerd-ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb.scope: Deactivated successfully. 
May 14 23:54:31.640120 containerd[1449]: time="2025-05-14T23:54:31.640008614Z" level=info msg="received exit event container_id:\"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\" id:\"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\" pid:4519 exited_at:{seconds:1747266871 nanos:639848454}" May 14 23:54:31.640207 containerd[1449]: time="2025-05-14T23:54:31.640179295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\" id:\"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\" pid:4519 exited_at:{seconds:1747266871 nanos:639848454}" May 14 23:54:31.647881 containerd[1449]: time="2025-05-14T23:54:31.647783088Z" level=info msg="StartContainer for \"ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb\" returns successfully" May 14 23:54:31.657578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea3053578c520d9fe82f2b320406b27e53402fe3ebb043fe654a7f8e28888ebb-rootfs.mount: Deactivated successfully. 
May 14 23:54:32.310400 kubelet[2591]: E0514 23:54:32.310363 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:32.521586 kubelet[2591]: E0514 23:54:32.521559 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:32.524089 containerd[1449]: time="2025-05-14T23:54:32.524055223Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 23:54:32.535580 containerd[1449]: time="2025-05-14T23:54:32.534740867Z" level=info msg="Container fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a: CDI devices from CRI Config.CDIDevices: []" May 14 23:54:32.545438 containerd[1449]: time="2025-05-14T23:54:32.545397192Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\"" May 14 23:54:32.547217 containerd[1449]: time="2025-05-14T23:54:32.546079675Z" level=info msg="StartContainer for \"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\"" May 14 23:54:32.547217 containerd[1449]: time="2025-05-14T23:54:32.546950598Z" level=info msg="connecting to shim fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" protocol=ttrpc version=3 May 14 23:54:32.568080 systemd[1]: Started cri-containerd-fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a.scope - libcontainer container 
fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a. May 14 23:54:32.590379 systemd[1]: cri-containerd-fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a.scope: Deactivated successfully. May 14 23:54:32.592727 containerd[1449]: time="2025-05-14T23:54:32.590710341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\" id:\"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\" pid:4557 exited_at:{seconds:1747266872 nanos:590490380}" May 14 23:54:32.592727 containerd[1449]: time="2025-05-14T23:54:32.592498548Z" level=info msg="received exit event container_id:\"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\" id:\"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\" pid:4557 exited_at:{seconds:1747266872 nanos:590490380}" May 14 23:54:32.593466 containerd[1449]: time="2025-05-14T23:54:32.593439392Z" level=info msg="StartContainer for \"fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a\" returns successfully" May 14 23:54:32.610184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf553d58c8b3344ea06c70f7a646405672fb6c69d75979ba85a2b2d47df819a-rootfs.mount: Deactivated successfully. 
May 14 23:54:33.311477 kubelet[2591]: E0514 23:54:33.311398 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:33.358673 kubelet[2591]: E0514 23:54:33.358624 2591 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:54:33.527742 kubelet[2591]: E0514 23:54:33.527708 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:33.530410 containerd[1449]: time="2025-05-14T23:54:33.530119604Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 23:54:33.542095 containerd[1449]: time="2025-05-14T23:54:33.542050933Z" level=info msg="Container fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09: CDI devices from CRI Config.CDIDevices: []" May 14 23:54:33.552158 containerd[1449]: time="2025-05-14T23:54:33.552101094Z" level=info msg="CreateContainer within sandbox \"2cf6f7d18949d442491354c40c6ebc3dcd37055429e7889ceb15b51b1a298745\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\"" May 14 23:54:33.554023 containerd[1449]: time="2025-05-14T23:54:33.552695376Z" level=info msg="StartContainer for \"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\"" May 14 23:54:33.554317 containerd[1449]: time="2025-05-14T23:54:33.554278022Z" level=info msg="connecting to shim fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09" address="unix:///run/containerd/s/b59ebda8ff320c5186a04f678c5615af1f4d56e966a2ba09b43248fb4197db8d" 
protocol=ttrpc version=3 May 14 23:54:33.583069 systemd[1]: Started cri-containerd-fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09.scope - libcontainer container fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09. May 14 23:54:33.613547 containerd[1449]: time="2025-05-14T23:54:33.613507223Z" level=info msg="StartContainer for \"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" returns successfully" May 14 23:54:33.671519 containerd[1449]: time="2025-05-14T23:54:33.671479699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" id:\"0440f805aa5aa728df078524d46309583c0eccdd2588d3394b31971f04437467\" pid:4626 exited_at:{seconds:1747266873 nanos:670794536}" May 14 23:54:33.904903 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 14 23:54:34.533911 kubelet[2591]: E0514 23:54:34.533746 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:34.824189 kubelet[2591]: I0514 23:54:34.823748 2591 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:54:34Z","lastTransitionTime":"2025-05-14T23:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 23:54:35.312618 kubelet[2591]: E0514 23:54:35.312564 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:35.709974 kubelet[2591]: E0514 23:54:35.709601 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:35.974168 containerd[1449]: time="2025-05-14T23:54:35.973851398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" id:\"b4a0ebd79ef7b79e9b64ae94d0ea8bc216a163a5819b4bf908435849d7be81d4\" pid:4914 exit_status:1 exited_at:{seconds:1747266875 nanos:973444397}" May 14 23:54:36.834896 systemd-networkd[1393]: lxc_health: Link UP May 14 23:54:36.835692 systemd-networkd[1393]: lxc_health: Gained carrier May 14 23:54:37.710151 kubelet[2591]: E0514 23:54:37.710114 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:37.726920 kubelet[2591]: I0514 23:54:37.726843 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vdrl" podStartSLOduration=8.726825133 podStartE2EDuration="8.726825133s" podCreationTimestamp="2025-05-14 23:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:34.550309855 +0000 UTC m=+81.329089829" watchObservedRunningTime="2025-05-14 23:54:37.726825133 +0000 UTC m=+84.505605107" May 14 23:54:38.095736 containerd[1449]: time="2025-05-14T23:54:38.095690318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" id:\"a60fd214f283c6ad2c5b775279ab3b8104d711b016be11bae872b069a2935cef\" pid:5173 exited_at:{seconds:1747266878 nanos:95270157}" May 14 23:54:38.521344 systemd-networkd[1393]: lxc_health: Gained IPv6LL May 14 23:54:38.541309 kubelet[2591]: E0514 23:54:38.541266 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 
23:54:39.543712 kubelet[2591]: E0514 23:54:39.543653 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:54:40.201952 containerd[1449]: time="2025-05-14T23:54:40.201645654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" id:\"6280b69dad04bbf26ad3fd37b1c726623567cce5ed5fd4413e714848050311b4\" pid:5201 exited_at:{seconds:1747266880 nanos:201332933}" May 14 23:54:42.324399 containerd[1449]: time="2025-05-14T23:54:42.324234669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe7e865585b05777a79b9a90aaf9a98f87936653c7846ddeaeeb0e64b12ffb09\" id:\"9a141c61a77f7748cee44bb28570e91108dc32a42f26ea9424b4a3a61d4eb9df\" pid:5232 exited_at:{seconds:1747266882 nanos:323758147}" May 14 23:54:42.336604 sshd[4358]: Connection closed by 10.0.0.1 port 33780 May 14 23:54:42.337017 sshd-session[4355]: pam_unix(sshd:session): session closed for user core May 14 23:54:42.340395 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:33780.service: Deactivated successfully. May 14 23:54:42.342729 systemd[1]: session-28.scope: Deactivated successfully. May 14 23:54:42.344681 systemd-logind[1432]: Session 28 logged out. Waiting for processes to exit. May 14 23:54:42.345863 systemd-logind[1432]: Removed session 28.