May 8 23:50:23.920503 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 23:50:23.920533 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025
May 8 23:50:23.920543 kernel: KASLR enabled
May 8 23:50:23.920548 kernel: efi: EFI v2.7 by EDK II
May 8 23:50:23.920554 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 8 23:50:23.920559 kernel: random: crng init done
May 8 23:50:23.920566 kernel: secureboot: Secure boot disabled
May 8 23:50:23.920572 kernel: ACPI: Early table checksum verification disabled
May 8 23:50:23.920578 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 8 23:50:23.920585 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 23:50:23.920591 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920597 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920602 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920608 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920616 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920624 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920630 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920636 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920643 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 23:50:23.920649 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 23:50:23.920655 kernel: NUMA: Failed to initialise from firmware
May 8 23:50:23.920661 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:50:23.920683 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 8 23:50:23.920689 kernel: Zone ranges:
May 8 23:50:23.920695 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:50:23.920704 kernel: DMA32 empty
May 8 23:50:23.920710 kernel: Normal empty
May 8 23:50:23.920716 kernel: Movable zone start for each node
May 8 23:50:23.920722 kernel: Early memory node ranges
May 8 23:50:23.920728 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 8 23:50:23.920735 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 8 23:50:23.920741 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 8 23:50:23.920747 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 8 23:50:23.920753 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 8 23:50:23.920759 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 8 23:50:23.920765 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 8 23:50:23.920772 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 23:50:23.920780 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 23:50:23.920786 kernel: psci: probing for conduit method from ACPI.
May 8 23:50:23.920792 kernel: psci: PSCIv1.1 detected in firmware.
May 8 23:50:23.920802 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 23:50:23.920855 kernel: psci: Trusted OS migration not required
May 8 23:50:23.920863 kernel: psci: SMC Calling Convention v1.1
May 8 23:50:23.920871 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 23:50:23.920878 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 8 23:50:23.920885 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 8 23:50:23.920891 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 23:50:23.920898 kernel: Detected PIPT I-cache on CPU0
May 8 23:50:23.920905 kernel: CPU features: detected: GIC system register CPU interface
May 8 23:50:23.920912 kernel: CPU features: detected: Hardware dirty bit management
May 8 23:50:23.920918 kernel: CPU features: detected: Spectre-v4
May 8 23:50:23.920925 kernel: CPU features: detected: Spectre-BHB
May 8 23:50:23.920931 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 23:50:23.920939 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 23:50:23.920946 kernel: CPU features: detected: ARM erratum 1418040
May 8 23:50:23.920952 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 23:50:23.920959 kernel: alternatives: applying boot alternatives
May 8 23:50:23.920966 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:50:23.920973 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 23:50:23.920979 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 23:50:23.920986 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 23:50:23.920993 kernel: Fallback order for Node 0: 0
May 8 23:50:23.920999 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 23:50:23.921006 kernel: Policy zone: DMA
May 8 23:50:23.921014 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 23:50:23.921020 kernel: software IO TLB: area num 4.
May 8 23:50:23.921026 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 8 23:50:23.921033 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 8 23:50:23.921040 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 23:50:23.921046 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 23:50:23.921053 kernel: rcu: RCU event tracing is enabled.
May 8 23:50:23.921060 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 23:50:23.921067 kernel: Trampoline variant of Tasks RCU enabled.
May 8 23:50:23.921073 kernel: Tracing variant of Tasks RCU enabled.
May 8 23:50:23.921080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 23:50:23.921087 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 23:50:23.921094 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 23:50:23.921101 kernel: GICv3: 256 SPIs implemented
May 8 23:50:23.921107 kernel: GICv3: 0 Extended SPIs implemented
May 8 23:50:23.921114 kernel: Root IRQ handler: gic_handle_irq
May 8 23:50:23.921120 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 8 23:50:23.921127 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 23:50:23.921133 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 23:50:23.921140 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 8 23:50:23.921147 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 8 23:50:23.921153 kernel: GICv3: using LPI property table @0x00000000400f0000
May 8 23:50:23.921160 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 8 23:50:23.921167 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 23:50:23.921174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:50:23.921181 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 23:50:23.921187 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 23:50:23.921194 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 23:50:23.921201 kernel: arm-pv: using stolen time PV
May 8 23:50:23.921207 kernel: Console: colour dummy device 80x25
May 8 23:50:23.921214 kernel: ACPI: Core revision 20230628
May 8 23:50:23.921221 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 23:50:23.921228 kernel: pid_max: default: 32768 minimum: 301
May 8 23:50:23.921236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 23:50:23.921243 kernel: landlock: Up and running.
May 8 23:50:23.921249 kernel: SELinux: Initializing.
May 8 23:50:23.921256 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:50:23.921263 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:50:23.921270 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 8 23:50:23.921277 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:50:23.921284 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 8 23:50:23.921290 kernel: rcu: Hierarchical SRCU implementation.
May 8 23:50:23.921298 kernel: rcu: Max phase no-delay instances is 400.
May 8 23:50:23.921305 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 23:50:23.921312 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 23:50:23.921319 kernel: Remapping and enabling EFI services.
May 8 23:50:23.921325 kernel: smp: Bringing up secondary CPUs ...
May 8 23:50:23.921332 kernel: Detected PIPT I-cache on CPU1
May 8 23:50:23.921339 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 23:50:23.921345 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 8 23:50:23.921352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:50:23.921359 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 23:50:23.921367 kernel: Detected PIPT I-cache on CPU2
May 8 23:50:23.921374 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 23:50:23.921386 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 8 23:50:23.921394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:50:23.921401 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 23:50:23.921408 kernel: Detected PIPT I-cache on CPU3
May 8 23:50:23.921415 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 23:50:23.921422 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 8 23:50:23.921429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 23:50:23.921440 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 23:50:23.921448 kernel: smp: Brought up 1 node, 4 CPUs
May 8 23:50:23.921455 kernel: SMP: Total of 4 processors activated.
May 8 23:50:23.921462 kernel: CPU features: detected: 32-bit EL0 Support
May 8 23:50:23.921470 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 23:50:23.921477 kernel: CPU features: detected: Common not Private translations
May 8 23:50:23.921484 kernel: CPU features: detected: CRC32 instructions
May 8 23:50:23.921491 kernel: CPU features: detected: Enhanced Virtualization Traps
May 8 23:50:23.921500 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 23:50:23.921507 kernel: CPU features: detected: LSE atomic instructions
May 8 23:50:23.921520 kernel: CPU features: detected: Privileged Access Never
May 8 23:50:23.921528 kernel: CPU features: detected: RAS Extension Support
May 8 23:50:23.921535 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 23:50:23.921542 kernel: CPU: All CPU(s) started at EL1
May 8 23:50:23.921549 kernel: alternatives: applying system-wide alternatives
May 8 23:50:23.921556 kernel: devtmpfs: initialized
May 8 23:50:23.921563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 23:50:23.921573 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 23:50:23.921580 kernel: pinctrl core: initialized pinctrl subsystem
May 8 23:50:23.921587 kernel: SMBIOS 3.0.0 present.
May 8 23:50:23.921594 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 8 23:50:23.921601 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 23:50:23.921609 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 23:50:23.921616 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 23:50:23.921623 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 23:50:23.921630 kernel: audit: initializing netlink subsys (disabled)
May 8 23:50:23.921639 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 8 23:50:23.921646 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 23:50:23.921653 kernel: cpuidle: using governor menu
May 8 23:50:23.921660 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 23:50:23.921667 kernel: ASID allocator initialised with 32768 entries
May 8 23:50:23.921675 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 23:50:23.921682 kernel: Serial: AMBA PL011 UART driver
May 8 23:50:23.921689 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 8 23:50:23.921696 kernel: Modules: 0 pages in range for non-PLT usage
May 8 23:50:23.921705 kernel: Modules: 508944 pages in range for PLT usage
May 8 23:50:23.921712 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 23:50:23.921719 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 23:50:23.921726 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 23:50:23.921733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 23:50:23.921740 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 23:50:23.921747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 23:50:23.921754 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 23:50:23.921761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 23:50:23.921770 kernel: ACPI: Added _OSI(Module Device)
May 8 23:50:23.921778 kernel: ACPI: Added _OSI(Processor Device)
May 8 23:50:23.921785 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 23:50:23.921792 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 23:50:23.921799 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 23:50:23.921806 kernel: ACPI: Interpreter enabled
May 8 23:50:23.921827 kernel: ACPI: Using GIC for interrupt routing
May 8 23:50:23.921834 kernel: ACPI: MCFG table detected, 1 entries
May 8 23:50:23.921842 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 23:50:23.921850 kernel: printk: console [ttyAMA0] enabled
May 8 23:50:23.921858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 23:50:23.921996 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 23:50:23.922091 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 23:50:23.922155 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 23:50:23.922217 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 23:50:23.922280 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 23:50:23.922293 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 23:50:23.922300 kernel: PCI host bridge to bus 0000:00
May 8 23:50:23.922371 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 23:50:23.922427 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 23:50:23.922483 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 23:50:23.922547 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 23:50:23.922625 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 23:50:23.922704 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 23:50:23.922769 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 23:50:23.922864 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 23:50:23.922930 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:50:23.922993 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 23:50:23.923056 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 23:50:23.923119 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 23:50:23.923181 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 23:50:23.923236 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 23:50:23.923292 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 23:50:23.923301 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 23:50:23.923308 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 23:50:23.923315 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 23:50:23.923322 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 23:50:23.923329 kernel: iommu: Default domain type: Translated
May 8 23:50:23.923339 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 23:50:23.923346 kernel: efivars: Registered efivars operations
May 8 23:50:23.923353 kernel: vgaarb: loaded
May 8 23:50:23.923360 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 23:50:23.923367 kernel: VFS: Disk quotas dquot_6.6.0
May 8 23:50:23.923374 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 23:50:23.923381 kernel: pnp: PnP ACPI init
May 8 23:50:23.923448 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 23:50:23.923460 kernel: pnp: PnP ACPI: found 1 devices
May 8 23:50:23.923467 kernel: NET: Registered PF_INET protocol family
May 8 23:50:23.923474 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 23:50:23.923482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 23:50:23.923489 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 23:50:23.923496 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 23:50:23.923503 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 23:50:23.923510 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 23:50:23.923525 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:50:23.923535 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:50:23.923542 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 23:50:23.923549 kernel: PCI: CLS 0 bytes, default 64
May 8 23:50:23.923556 kernel: kvm [1]: HYP mode not available
May 8 23:50:23.923564 kernel: Initialise system trusted keyrings
May 8 23:50:23.923570 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 23:50:23.923578 kernel: Key type asymmetric registered
May 8 23:50:23.923585 kernel: Asymmetric key parser 'x509' registered
May 8 23:50:23.923592 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 23:50:23.923600 kernel: io scheduler mq-deadline registered
May 8 23:50:23.923607 kernel: io scheduler kyber registered
May 8 23:50:23.923614 kernel: io scheduler bfq registered
May 8 23:50:23.923621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 23:50:23.923628 kernel: ACPI: button: Power Button [PWRB]
May 8 23:50:23.923636 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 23:50:23.923705 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 23:50:23.923715 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 23:50:23.923722 kernel: thunder_xcv, ver 1.0
May 8 23:50:23.923731 kernel: thunder_bgx, ver 1.0
May 8 23:50:23.923738 kernel: nicpf, ver 1.0
May 8 23:50:23.923745 kernel: nicvf, ver 1.0
May 8 23:50:23.923829 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 23:50:23.923894 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:50:23 UTC (1746748223)
May 8 23:50:23.923903 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 23:50:23.923910 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 23:50:23.923918 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 23:50:23.923927 kernel: watchdog: Hard watchdog permanently disabled
May 8 23:50:23.923934 kernel: NET: Registered PF_INET6 protocol family
May 8 23:50:23.923941 kernel: Segment Routing with IPv6
May 8 23:50:23.923948 kernel: In-situ OAM (IOAM) with IPv6
May 8 23:50:23.923955 kernel: NET: Registered PF_PACKET protocol family
May 8 23:50:23.923963 kernel: Key type dns_resolver registered
May 8 23:50:23.923970 kernel: registered taskstats version 1
May 8 23:50:23.923977 kernel: Loading compiled-in X.509 certificates
May 8 23:50:23.923985 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b'
May 8 23:50:23.923993 kernel: Key type .fscrypt registered
May 8 23:50:23.924001 kernel: Key type fscrypt-provisioning registered
May 8 23:50:23.924008 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 23:50:23.924015 kernel: ima: Allocated hash algorithm: sha1
May 8 23:50:23.924022 kernel: ima: No architecture policies found
May 8 23:50:23.924029 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 23:50:23.924036 kernel: clk: Disabling unused clocks
May 8 23:50:23.924043 kernel: Freeing unused kernel memory: 39744K
May 8 23:50:23.924050 kernel: Run /init as init process
May 8 23:50:23.924059 kernel: with arguments:
May 8 23:50:23.924065 kernel: /init
May 8 23:50:23.924072 kernel: with environment:
May 8 23:50:23.924079 kernel: HOME=/
May 8 23:50:23.924086 kernel: TERM=linux
May 8 23:50:23.924093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 23:50:23.924102 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:50:23.924111 systemd[1]: Detected virtualization kvm.
May 8 23:50:23.924121 systemd[1]: Detected architecture arm64.
May 8 23:50:23.924128 systemd[1]: Running in initrd.
May 8 23:50:23.924135 systemd[1]: No hostname configured, using default hostname.
May 8 23:50:23.924142 systemd[1]: Hostname set to .
May 8 23:50:23.924150 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:50:23.924158 systemd[1]: Queued start job for default target initrd.target.
May 8 23:50:23.924166 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:50:23.924174 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:50:23.924183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 23:50:23.924191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:50:23.924199 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 23:50:23.924206 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 23:50:23.924215 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 23:50:23.924223 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 23:50:23.924231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:50:23.924240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:50:23.924247 systemd[1]: Reached target paths.target - Path Units.
May 8 23:50:23.924255 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:50:23.924263 systemd[1]: Reached target swap.target - Swaps.
May 8 23:50:23.924270 systemd[1]: Reached target timers.target - Timer Units.
May 8 23:50:23.924278 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 23:50:23.924285 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 23:50:23.924293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 23:50:23.924302 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 23:50:23.924310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:50:23.924318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:50:23.924326 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:50:23.924333 systemd[1]: Reached target sockets.target - Socket Units.
May 8 23:50:23.924341 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 23:50:23.924349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:50:23.924356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 23:50:23.924364 systemd[1]: Starting systemd-fsck-usr.service...
May 8 23:50:23.924373 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:50:23.924381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:50:23.924389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:50:23.924396 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 23:50:23.924404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:50:23.924411 systemd[1]: Finished systemd-fsck-usr.service.
May 8 23:50:23.924437 systemd-journald[240]: Collecting audit messages is disabled.
May 8 23:50:23.924456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:50:23.924466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:50:23.924475 systemd-journald[240]: Journal started
May 8 23:50:23.924493 systemd-journald[240]: Runtime Journal (/run/log/journal/c7b86b70f0064ad88efcd50d89a3cab1) is 5.9M, max 47.3M, 41.4M free.
May 8 23:50:23.915605 systemd-modules-load[241]: Inserted module 'overlay'
May 8 23:50:23.927221 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:50:23.927656 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:50:23.931821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 23:50:23.931975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:50:23.934835 kernel: Bridge firewalling registered
May 8 23:50:23.934792 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 8 23:50:23.936013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:50:23.937657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:50:23.939448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:50:23.943989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:50:23.946842 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:50:23.948368 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:50:23.958103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:50:23.959358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:50:23.966979 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 23:50:23.969249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:50:23.976636 dracut-cmdline[278]: dracut-dracut-053
May 8 23:50:23.979042 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:50:23.997853 systemd-resolved[280]: Positive Trust Anchors:
May 8 23:50:23.997924 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:50:23.997956 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:50:24.002656 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 8 23:50:24.003609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:50:24.007286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:50:24.051835 kernel: SCSI subsystem initialized
May 8 23:50:24.054824 kernel: Loading iSCSI transport class v2.0-870.
May 8 23:50:24.061834 kernel: iscsi: registered transport (tcp)
May 8 23:50:24.076842 kernel: iscsi: registered transport (qla4xxx)
May 8 23:50:24.076887 kernel: QLogic iSCSI HBA Driver
May 8 23:50:24.117571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 23:50:24.124964 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 23:50:24.141840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 23:50:24.141901 kernel: device-mapper: uevent: version 1.0.3
May 8 23:50:24.141911 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 23:50:24.187874 kernel: raid6: neonx8 gen() 15666 MB/s
May 8 23:50:24.204851 kernel: raid6: neonx4 gen() 15550 MB/s
May 8 23:50:24.221835 kernel: raid6: neonx2 gen() 13221 MB/s
May 8 23:50:24.238835 kernel: raid6: neonx1 gen() 10359 MB/s
May 8 23:50:24.255842 kernel: raid6: int64x8 gen() 6897 MB/s
May 8 23:50:24.272833 kernel: raid6: int64x4 gen() 7267 MB/s
May 8 23:50:24.289834 kernel: raid6: int64x2 gen() 6073 MB/s
May 8 23:50:24.307085 kernel: raid6: int64x1 gen() 5009 MB/s
May 8 23:50:24.307099 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s
May 8 23:50:24.324902 kernel: raid6: .... xor() 11930 MB/s, rmw enabled
May 8 23:50:24.324915 kernel: raid6: using neon recovery algorithm
May 8 23:50:24.329833 kernel: xor: measuring software checksum speed
May 8 23:50:24.331083 kernel: 8regs : 17337 MB/sec
May 8 23:50:24.331099 kernel: 32regs : 17889 MB/sec
May 8 23:50:24.332343 kernel: arm64_neon : 27007 MB/sec
May 8 23:50:24.332356 kernel: xor: using function: arm64_neon (27007 MB/sec)
May 8 23:50:24.382792 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 23:50:24.393847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:50:24.406958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:50:24.418644 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 8 23:50:24.421728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:50:24.424953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 23:50:24.438938 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 8 23:50:24.464954 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 23:50:24.473985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 23:50:24.511524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 23:50:24.519039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 23:50:24.533868 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 23:50:24.535353 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 23:50:24.537492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 23:50:24.540282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 23:50:24.549126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 23:50:24.560843 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 23:50:24.570285 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 8 23:50:24.570436 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 23:50:24.570505 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 23:50:24.570524 kernel: GPT:9289727 != 19775487
May 8 23:50:24.571322 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 23:50:24.571352 kernel: GPT:9289727 != 19775487
May 8 23:50:24.572323 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 23:50:24.572348 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:50:24.575967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 23:50:24.576089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:50:24.582728 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:50:24.583887 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 23:50:24.584029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:50:24.586151 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:50:24.595843 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (523)
May 8 23:50:24.597857 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514)
May 8 23:50:24.599209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:50:24.609858 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:50:24.614552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 23:50:24.622775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 23:50:24.626618 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 23:50:24.627860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 23:50:24.634013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 23:50:24.652970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 23:50:24.654794 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:50:24.660911 disk-uuid[550]: Primary Header is updated.
May 8 23:50:24.660911 disk-uuid[550]: Secondary Entries is updated.
May 8 23:50:24.660911 disk-uuid[550]: Secondary Header is updated.
May 8 23:50:24.664838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:50:24.673066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:50:25.676828 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 23:50:25.677585 disk-uuid[554]: The operation has completed successfully.
May 8 23:50:25.697115 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 23:50:25.697204 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 23:50:25.719952 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 23:50:25.722571 sh[571]: Success
May 8 23:50:25.736836 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 23:50:25.772275 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 23:50:25.773937 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 23:50:25.774931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 23:50:25.784314 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017
May 8 23:50:25.784348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 8 23:50:25.785462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 23:50:25.786247 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 23:50:25.786260 kernel: BTRFS info (device dm-0): using free space tree
May 8 23:50:25.790075 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 23:50:25.791306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 23:50:25.802937 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 23:50:25.804358 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 23:50:25.811370 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4
May 8 23:50:25.811409 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:50:25.811419 kernel: BTRFS info (device vda6): using free space tree
May 8 23:50:25.813829 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:50:25.820437 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 23:50:25.822250 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4
May 8 23:50:25.828092 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 23:50:25.834965 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 23:50:25.890030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 23:50:25.902979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 23:50:25.927210 ignition[664]: Ignition 2.20.0
May 8 23:50:25.927221 ignition[664]: Stage: fetch-offline
May 8 23:50:25.927255 ignition[664]: no configs at "/usr/lib/ignition/base.d"
May 8 23:50:25.927263 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:50:25.927469 ignition[664]: parsed url from cmdline: ""
May 8 23:50:25.927472 ignition[664]: no config URL provided
May 8 23:50:25.927476 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
May 8 23:50:25.927483 ignition[664]: no config at "/usr/lib/ignition/user.ign"
May 8 23:50:25.927515 ignition[664]: op(1): [started] loading QEMU firmware config module
May 8 23:50:25.927520 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 23:50:25.936929 ignition[664]: op(1): [finished] loading QEMU firmware config module
May 8 23:50:25.936951 ignition[664]: QEMU firmware config was not found. Ignoring...
May 8 23:50:25.942110 systemd-networkd[762]: lo: Link UP
May 8 23:50:25.942119 systemd-networkd[762]: lo: Gained carrier
May 8 23:50:25.943114 systemd-networkd[762]: Enumeration completed
May 8 23:50:25.943211 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 23:50:25.943674 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:50:25.943677 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 23:50:25.944947 systemd-networkd[762]: eth0: Link UP
May 8 23:50:25.944950 systemd-networkd[762]: eth0: Gained carrier
May 8 23:50:25.944956 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:50:25.945288 systemd[1]: Reached target network.target - Network.
May 8 23:50:25.966865 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 23:50:25.985070 ignition[664]: parsing config with SHA512: dace2309fc6e1641e1f4302c8a5b4f9f306851864061c9b52d09533b196892d6d5aa8a650861fca2c4fec5d36a2c73fcc4e296afff2963d2c77bc501231c7ae1
May 8 23:50:25.989857 unknown[664]: fetched base config from "system"
May 8 23:50:25.989868 unknown[664]: fetched user config from "qemu"
May 8 23:50:25.990277 ignition[664]: fetch-offline: fetch-offline passed
May 8 23:50:25.992221 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 23:50:25.990351 ignition[664]: Ignition finished successfully
May 8 23:50:25.993456 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 23:50:26.000031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 23:50:26.009973 ignition[771]: Ignition 2.20.0
May 8 23:50:26.009982 ignition[771]: Stage: kargs
May 8 23:50:26.010133 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 8 23:50:26.010142 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:50:26.011050 ignition[771]: kargs: kargs passed
May 8 23:50:26.011090 ignition[771]: Ignition finished successfully
May 8 23:50:26.014705 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 23:50:26.026954 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 23:50:26.035761 ignition[780]: Ignition 2.20.0
May 8 23:50:26.035772 ignition[780]: Stage: disks
May 8 23:50:26.035951 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 8 23:50:26.035961 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:50:26.036858 ignition[780]: disks: disks passed
May 8 23:50:26.039079 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 23:50:26.036904 ignition[780]: Ignition finished successfully
May 8 23:50:26.040609 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 23:50:26.042246 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 23:50:26.043941 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 23:50:26.045757 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 23:50:26.047751 systemd[1]: Reached target basic.target - Basic System.
May 8 23:50:26.058953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 23:50:26.068248 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 8 23:50:26.071545 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 23:50:26.074286 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 23:50:26.116836 kernel: EXT4-fs (vda9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none.
May 8 23:50:26.117444 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 23:50:26.118719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 23:50:26.145929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 23:50:26.147672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 23:50:26.149608 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 23:50:26.149657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 23:50:26.149730 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 23:50:26.157492 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
May 8 23:50:26.154054 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 23:50:26.161738 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4
May 8 23:50:26.161759 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:50:26.161769 kernel: BTRFS info (device vda6): using free space tree
May 8 23:50:26.157354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 23:50:26.165032 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:50:26.165958 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 23:50:26.206901 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 8 23:50:26.209854 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 8 23:50:26.213511 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 8 23:50:26.216173 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 23:50:26.288237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 23:50:26.299896 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 23:50:26.302571 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 23:50:26.305849 kernel: BTRFS info (device vda6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4
May 8 23:50:26.324310 ignition[914]: INFO : Ignition 2.20.0
May 8 23:50:26.324310 ignition[914]: INFO : Stage: mount
May 8 23:50:26.325929 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:50:26.325929 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:50:26.325929 ignition[914]: INFO : mount: mount passed
May 8 23:50:26.325929 ignition[914]: INFO : Ignition finished successfully
May 8 23:50:26.325561 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 23:50:26.328073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 23:50:26.336935 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 23:50:26.783382 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 23:50:26.796006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 23:50:26.802781 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
May 8 23:50:26.802817 kernel: BTRFS info (device vda6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4
May 8 23:50:26.802834 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 23:50:26.804819 kernel: BTRFS info (device vda6): using free space tree
May 8 23:50:26.806821 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 23:50:26.807651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 23:50:26.823783 ignition[946]: INFO : Ignition 2.20.0
May 8 23:50:26.823783 ignition[946]: INFO : Stage: files
May 8 23:50:26.825327 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 23:50:26.825327 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 23:50:26.825327 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 8 23:50:26.828835 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 23:50:26.828835 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 23:50:26.828835 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 23:50:26.828835 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 23:50:26.828835 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 23:50:26.828019 unknown[946]: wrote ssh authorized keys file for user: core
May 8 23:50:26.836143 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 23:50:26.836143 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 23:50:27.235019 systemd-networkd[762]: eth0: Gained IPv6LL
May 8 23:50:28.805997 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 23:50:32.350482 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 23:50:32.350482 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 23:50:32.354328 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 23:50:32.758871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 23:50:32.981256 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 23:50:32.981256 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 23:50:32.984951 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 8 23:50:33.313398 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 23:50:34.317304 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 23:50:34.317304 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 23:50:34.321152 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 8 23:50:34.344729 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:50:34.348757 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 23:50:34.350387 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 23:50:34.350387 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 8 23:50:34.350387 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 8 23:50:34.350387 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:50:34.350387 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 23:50:34.350387 ignition[946]: INFO : files: files passed
May 8 23:50:34.350387 ignition[946]: INFO : Ignition finished successfully
May 8 23:50:34.355100 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 23:50:34.370961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 23:50:34.373577 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 23:50:34.375185 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 23:50:34.376841 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 23:50:34.381722 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
May 8 23:50:34.385218 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:50:34.385218 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:50:34.388393 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 23:50:34.387576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 23:50:34.390017 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 23:50:34.398953 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 23:50:34.418546 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 23:50:34.419632 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 23:50:34.421104 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:50:34.422913 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:50:34.424347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:50:34.425158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:50:34.441483 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:50:34.443961 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:50:34.455340 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:50:34.456702 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:50:34.458900 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:50:34.460717 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:50:34.460856 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:50:34.463368 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:50:34.465371 systemd[1]: Stopped target basic.target - Basic System. May 8 23:50:34.467030 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:50:34.468754 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:50:34.470723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 23:50:34.472834 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:50:34.474782 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:50:34.476779 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:50:34.478768 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:50:34.480500 systemd[1]: Stopped target swap.target - Swaps. May 8 23:50:34.482015 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:50:34.482134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:50:34.484584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:50:34.486557 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:50:34.488483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:50:34.491870 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:50:34.493116 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:50:34.493230 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 23:50:34.496087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:50:34.496205 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:50:34.498162 systemd[1]: Stopped target paths.target - Path Units. May 8 23:50:34.499699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 23:50:34.502859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 23:50:34.504154 systemd[1]: Stopped target slices.target - Slice Units. May 8 23:50:34.506248 systemd[1]: Stopped target sockets.target - Socket Units. May 8 23:50:34.507783 systemd[1]: iscsid.socket: Deactivated successfully. 
May 8 23:50:34.507904 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 23:50:34.509412 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 23:50:34.509502 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 23:50:34.511045 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:50:34.511150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:50:34.512907 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:50:34.513012 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:50:34.525961 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:50:34.526855 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 23:50:34.526984 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:50:34.530134 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:50:34.531674 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:50:34.531794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:50:34.534366 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:50:34.534534 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:50:34.539130 ignition[1001]: INFO : Ignition 2.20.0 May 8 23:50:34.539130 ignition[1001]: INFO : Stage: umount May 8 23:50:34.539130 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:50:34.539130 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 23:50:34.544896 ignition[1001]: INFO : umount: umount passed May 8 23:50:34.544896 ignition[1001]: INFO : Ignition finished successfully May 8 23:50:34.540227 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:50:34.540322 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:50:34.542253 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:50:34.542340 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:50:34.544259 systemd[1]: Stopped target network.target - Network. May 8 23:50:34.545933 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:50:34.545995 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:50:34.547551 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:50:34.547593 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:50:34.549189 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 23:50:34.549228 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 23:50:34.550804 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 23:50:34.550860 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 23:50:34.552965 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 23:50:34.554680 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 23:50:34.557206 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 23:50:34.557663 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 23:50:34.557745 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 23:50:34.557836 systemd-networkd[762]: eth0: DHCPv6 lease lost May 8 23:50:34.559181 systemd[1]: systemd-networkd.service: Deactivated successfully. 
May 8 23:50:34.559277 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 23:50:34.561144 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 23:50:34.561243 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 23:50:34.564641 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 23:50:34.564691 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 23:50:34.566598 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 23:50:34.566648 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 23:50:34.573901 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 23:50:34.575069 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 23:50:34.575126 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:50:34.577038 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:50:34.577082 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:50:34.578802 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 23:50:34.578858 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 23:50:34.580611 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 8 23:50:34.580653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:50:34.582724 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:50:34.592097 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 23:50:34.592186 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 23:50:34.596436 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 23:50:34.596568 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:50:34.598448 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 23:50:34.598498 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 23:50:34.600223 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 23:50:34.600255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:50:34.602129 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 23:50:34.602171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 23:50:34.605022 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 23:50:34.605065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 23:50:34.607968 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:50:34.608008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:50:34.622016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 23:50:34.623212 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 23:50:34.623272 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:50:34.625528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:50:34.625573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:50:34.627773 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 8 23:50:34.627864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 23:50:34.630251 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 23:50:34.632459 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 23:50:34.642143 systemd[1]: Switching root. May 8 23:50:34.669833 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). May 8 23:50:34.669880 systemd-journald[240]: Journal stopped May 8 23:50:35.415358 kernel: SELinux: policy capability network_peer_controls=1 May 8 23:50:35.415408 kernel: SELinux: policy capability open_perms=1 May 8 23:50:35.415423 kernel: SELinux: policy capability extended_socket_class=1 May 8 23:50:35.415433 kernel: SELinux: policy capability always_check_network=0 May 8 23:50:35.415443 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 23:50:35.415467 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 23:50:35.415492 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 23:50:35.415502 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 23:50:35.415513 kernel: audit: type=1403 audit(1746748234.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 23:50:35.415527 systemd[1]: Successfully loaded SELinux policy in 30.707ms. May 8 23:50:35.415547 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.764ms. May 8 23:50:35.415560 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 23:50:35.415572 systemd[1]: Detected virtualization kvm. May 8 23:50:35.415582 systemd[1]: Detected architecture arm64. May 8 23:50:35.415593 systemd[1]: Detected first boot. May 8 23:50:35.415603 systemd[1]: Initializing machine ID from VM UUID. May 8 23:50:35.415613 zram_generator::config[1048]: No configuration found. May 8 23:50:35.415625 systemd[1]: Populated /etc with preset unit settings. May 8 23:50:35.415635 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 23:50:35.415647 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 23:50:35.415658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 23:50:35.415668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 23:50:35.415679 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 23:50:35.415692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 23:50:35.415702 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 23:50:35.415713 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 23:50:35.415723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 23:50:35.415733 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 23:50:35.415746 systemd[1]: Created slice user.slice - User and Session Slice. May 8 23:50:35.415756 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:50:35.415767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
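The "Initializing machine ID from VM UUID" line above records systemd deriving /etc/machine-id from the hypervisor-supplied DMI product UUID rather than generating a random one. A rough approximation of that derivation, for illustration only (systemd's real logic is in C and handles more cases):

    import pathlib

    # Under KVM on first boot, systemd can seed the machine ID from the VM's
    # DMI product UUID; dropping the dashes gives the 32-hex-digit form used
    # in /etc/machine-id.
    uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print(uuid.lower().replace("-", ""))
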
May 8 23:50:35.415778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 23:50:35.415789 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 23:50:35.415799 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 8 23:50:35.415829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 23:50:35.415843 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 8 23:50:35.415856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:50:35.415866 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 23:50:35.415877 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 23:50:35.415886 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 23:50:35.415897 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 23:50:35.415912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:50:35.415923 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:50:35.415933 systemd[1]: Reached target slices.target - Slice Units. May 8 23:50:35.415946 systemd[1]: Reached target swap.target - Swaps. May 8 23:50:35.415957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 23:50:35.415968 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 23:50:35.415978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 23:50:35.415988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 23:50:35.415999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 23:50:35.416009 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 23:50:35.416019 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 23:50:35.416029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 23:50:35.416041 systemd[1]: Mounting media.mount - External Media Directory... May 8 23:50:35.416051 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 23:50:35.416062 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 23:50:35.416072 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 23:50:35.416083 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 23:50:35.416093 systemd[1]: Reached target machines.target - Containers. May 8 23:50:35.416104 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 23:50:35.416114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:50:35.416124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 23:50:35.416136 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 23:50:35.416147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:50:35.416157 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 8 23:50:35.416168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:50:35.416178 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 23:50:35.416188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:50:35.416199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 23:50:35.416209 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 23:50:35.416221 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 23:50:35.416231 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 23:50:35.416241 systemd[1]: Stopped systemd-fsck-usr.service. May 8 23:50:35.416251 kernel: fuse: init (API version 7.39) May 8 23:50:35.416261 kernel: ACPI: bus type drm_connector registered May 8 23:50:35.416270 kernel: loop: module loaded May 8 23:50:35.416280 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 23:50:35.416290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 23:50:35.416300 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 23:50:35.416312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 23:50:35.416344 systemd-journald[1115]: Collecting audit messages is disabled. May 8 23:50:35.416372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:50:35.416383 systemd-journald[1115]: Journal started May 8 23:50:35.416408 systemd-journald[1115]: Runtime Journal (/run/log/journal/c7b86b70f0064ad88efcd50d89a3cab1) is 5.9M, max 47.3M, 41.4M free. May 8 23:50:35.223740 systemd[1]: Queued start job for default target multi-user.target. May 8 23:50:35.237565 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 23:50:35.237933 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 23:50:35.420530 systemd[1]: verity-setup.service: Deactivated successfully. May 8 23:50:35.420568 systemd[1]: Stopped verity-setup.service. May 8 23:50:35.424006 systemd[1]: Started systemd-journald.service - Journal Service. May 8 23:50:35.424702 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 23:50:35.426044 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 23:50:35.427293 systemd[1]: Mounted media.mount - External Media Directory. May 8 23:50:35.428446 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 23:50:35.429701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 23:50:35.430975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 23:50:35.433847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 23:50:35.435254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 23:50:35.436766 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 23:50:35.436930 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 23:50:35.438390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:50:35.438541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:50:35.440009 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 8 23:50:35.440132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:50:35.441523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:50:35.441663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:50:35.443269 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 23:50:35.443403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 23:50:35.444754 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:50:35.444936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:50:35.446343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 23:50:35.447721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 23:50:35.449418 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 23:50:35.461756 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 23:50:35.471947 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 23:50:35.474151 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 23:50:35.475261 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 23:50:35.475301 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:50:35.477273 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 23:50:35.479533 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 23:50:35.481645 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 23:50:35.482878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:50:35.484116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 23:50:35.486018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 23:50:35.487280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:50:35.488982 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 23:50:35.490150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:50:35.493989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:50:35.497057 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 23:50:35.499718 systemd-journald[1115]: Time spent on flushing to /var/log/journal/c7b86b70f0064ad88efcd50d89a3cab1 is 17.984ms for 860 entries. May 8 23:50:35.499718 systemd-journald[1115]: System Journal (/var/log/journal/c7b86b70f0064ad88efcd50d89a3cab1) is 8.0M, max 195.6M, 187.6M free. May 8 23:50:35.541753 systemd-journald[1115]: Received client request to flush runtime journal. May 8 23:50:35.541834 kernel: loop0: detected capacity change from 0 to 189592 May 8 23:50:35.541859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 23:50:35.501493 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 8 23:50:35.504174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:50:35.505642 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 23:50:35.507148 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 8 23:50:35.508572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 23:50:35.510143 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 23:50:35.516540 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 23:50:35.527983 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 23:50:35.530968 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 23:50:35.534169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:50:35.548481 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 23:50:35.549728 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 23:50:35.553753 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 23:50:35.557329 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 23:50:35.566860 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 23:50:35.575008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 23:50:35.581846 kernel: loop1: detected capacity change from 0 to 113536 May 8 23:50:35.597350 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. May 8 23:50:35.597371 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. May 8 23:50:35.601540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 23:50:35.620118 kernel: loop2: detected capacity change from 0 to 116808 May 8 23:50:35.655837 kernel: loop3: detected capacity change from 0 to 189592 May 8 23:50:35.662840 kernel: loop4: detected capacity change from 0 to 113536 May 8 23:50:35.670829 kernel: loop5: detected capacity change from 0 to 116808 May 8 23:50:35.673437 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 23:50:35.673901 (sd-merge)[1184]: Merged extensions into '/usr'. May 8 23:50:35.677790 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... May 8 23:50:35.677850 systemd[1]: Reloading... May 8 23:50:35.740526 zram_generator::config[1215]: No configuration found. May 8 23:50:35.766097 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 23:50:35.824171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:50:35.859251 systemd[1]: Reloading finished in 181 ms. May 8 23:50:35.894192 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 23:50:35.895718 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 23:50:35.912986 systemd[1]: Starting ensure-sysext.service... 
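The loop3-loop5 capacity changes and the (sd-merge) lines above record systemd-sysext activating the containerd-flatcar, docker-flatcar, and kubernetes extension images and overlaying them read-only onto /usr. A small sketch of the discovery step, assuming only the standard sysext search directories are involved; note that /etc/extensions/kubernetes.raw is exactly the symlink Ignition wrote earlier in this log:

    import pathlib

    # systemd-sysext scans these directories for extension images; each *.raw
    # contributes its /usr (and /opt) trees to one merged overlay mount.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        root = pathlib.Path(d)
        if root.is_dir():
            for image in sorted(root.glob("*.raw")):
                print(image, "->", image.resolve())
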
May 8 23:50:35.914990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 23:50:35.932240 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... May 8 23:50:35.932258 systemd[1]: Reloading... May 8 23:50:35.945017 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 23:50:35.945316 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 23:50:35.946071 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 23:50:35.946333 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 8 23:50:35.946392 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. May 8 23:50:35.949086 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:50:35.949097 systemd-tmpfiles[1246]: Skipping /boot May 8 23:50:35.956096 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. May 8 23:50:35.956105 systemd-tmpfiles[1246]: Skipping /boot May 8 23:50:35.980854 zram_generator::config[1270]: No configuration found. May 8 23:50:36.067385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:50:36.102385 systemd[1]: Reloading finished in 169 ms. May 8 23:50:36.120869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 23:50:36.129196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 23:50:36.138968 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:50:36.141213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 23:50:36.143506 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 23:50:36.149146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 23:50:36.158164 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:50:36.162999 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 23:50:36.166443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:50:36.167705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:50:36.173108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:50:36.175333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:50:36.177862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:50:36.179857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 23:50:36.181747 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 23:50:36.183388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:50:36.183565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:50:36.187660 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 8 23:50:36.190357 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:50:36.192013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:50:36.192132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:50:36.197693 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 23:50:36.201178 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 23:50:36.202669 systemd-udevd[1314]: Using default interface naming scheme 'v255'. May 8 23:50:36.205505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 23:50:36.218130 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 23:50:36.221823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 23:50:36.224777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 23:50:36.229063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 23:50:36.230243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 23:50:36.232040 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 23:50:36.234048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 23:50:36.234834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:50:36.237358 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 23:50:36.240635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 23:50:36.240787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 23:50:36.244433 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 23:50:36.244605 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 23:50:36.250509 systemd[1]: Finished ensure-sysext.service. May 8 23:50:36.269050 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:50:36.275463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 23:50:36.275834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1349) May 8 23:50:36.277457 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 8 23:50:36.281963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 23:50:36.282859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 23:50:36.292750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 23:50:36.293132 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 23:50:36.293290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 23:50:36.294579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 23:50:36.296292 augenrules[1381]: No rules May 8 23:50:36.300938 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 8 23:50:36.303082 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:50:36.303227 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:50:36.310895 systemd-resolved[1313]: Positive Trust Anchors: May 8 23:50:36.310973 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 23:50:36.311005 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 23:50:36.318384 systemd-resolved[1313]: Defaulting to hostname 'linux'. May 8 23:50:36.319782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:50:36.321135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:50:36.345513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 23:50:36.351052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 23:50:36.357705 systemd-networkd[1376]: lo: Link UP May 8 23:50:36.357716 systemd-networkd[1376]: lo: Gained carrier May 8 23:50:36.358566 systemd-networkd[1376]: Enumeration completed May 8 23:50:36.358690 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:50:36.360388 systemd[1]: Reached target network.target - Network. May 8 23:50:36.362392 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:50:36.362395 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 23:50:36.362968 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:50:36.362993 systemd-networkd[1376]: eth0: Link UP May 8 23:50:36.362997 systemd-networkd[1376]: eth0: Gained carrier May 8 23:50:36.363005 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:50:36.373989 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 23:50:36.375270 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 23:50:36.377022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 23:50:36.379613 systemd[1]: Reached target time-set.target - System Time Set. May 8 23:50:36.382975 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 23:50:36.386938 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 8 23:50:35.920090 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 23:50:35.925968 systemd-journald[1115]: Time jumped backwards, rotating. May 8 23:50:35.920288 systemd-timesyncd[1377]: Initial clock synchronization to Thu 2025-05-08 23:50:35.919776 UTC. 
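eth0 matched the catch-all '/usr/lib/systemd/network/zz-default.network' policy above, which is why it came up with plain DHCP and acquired 10.0.0.39/16 from 10.0.0.1. The shipped file is not dumped in this log; as an assumed sketch, a catch-all DHCP unit of that kind boils down to something like the following (shown as a Python string for illustration; ZZ_DEFAULT is a made-up name, and the real file may carry extra [DHCP] tuning):

    # Assumed approximation of a catch-all .network unit.
    ZZ_DEFAULT = """\
    [Match]
    Name=*

    [Network]
    DHCP=yes
    """
    print(ZZ_DEFAULT)
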
May 8 23:50:35.920901 systemd-resolved[1313]: Clock change detected. Flushing caches. May 8 23:50:35.928946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:50:35.943895 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 23:50:35.959016 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 8 23:50:35.970926 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:50:35.976076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:50:36.003266 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 23:50:36.004722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 23:50:36.005862 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:50:36.006996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 23:50:36.008241 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 23:50:36.009628 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 23:50:36.010775 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 23:50:36.012122 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 23:50:36.013310 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 23:50:36.013350 systemd[1]: Reached target paths.target - Path Units. May 8 23:50:36.014233 systemd[1]: Reached target timers.target - Timer Units. May 8 23:50:36.015778 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 23:50:36.018198 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 23:50:36.032886 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 23:50:36.034950 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 23:50:36.036429 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 23:50:36.037620 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:50:36.038574 systemd[1]: Reached target basic.target - Basic System. May 8 23:50:36.039538 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 23:50:36.039572 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 23:50:36.040442 systemd[1]: Starting containerd.service - containerd container runtime... May 8 23:50:36.042345 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 23:50:36.042420 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 23:50:36.045420 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 23:50:36.050952 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 23:50:36.051948 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 23:50:36.052929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
May 8 23:50:36.057521 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 23:50:36.061577 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 23:50:36.065089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 23:50:36.065763 jq[1416]: false May 8 23:50:36.075446 extend-filesystems[1417]: Found loop3 May 8 23:50:36.075446 extend-filesystems[1417]: Found loop4 May 8 23:50:36.075446 extend-filesystems[1417]: Found loop5 May 8 23:50:36.075446 extend-filesystems[1417]: Found vda May 8 23:50:36.075446 extend-filesystems[1417]: Found vda1 May 8 23:50:36.083502 extend-filesystems[1417]: Found vda2 May 8 23:50:36.083502 extend-filesystems[1417]: Found vda3 May 8 23:50:36.083502 extend-filesystems[1417]: Found usr May 8 23:50:36.083502 extend-filesystems[1417]: Found vda4 May 8 23:50:36.083502 extend-filesystems[1417]: Found vda6 May 8 23:50:36.083502 extend-filesystems[1417]: Found vda7 May 8 23:50:36.083502 extend-filesystems[1417]: Found vda9 May 8 23:50:36.083502 extend-filesystems[1417]: Checking size of /dev/vda9 May 8 23:50:36.083088 dbus-daemon[1415]: [system] SELinux support is enabled May 8 23:50:36.076419 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 23:50:36.080570 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 23:50:36.081036 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 23:50:36.082000 systemd[1]: Starting update-engine.service - Update Engine... May 8 23:50:36.086115 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 23:50:36.088427 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 23:50:36.092878 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 23:50:36.098568 extend-filesystems[1417]: Resized partition /dev/vda9 May 8 23:50:36.103846 jq[1434]: true May 8 23:50:36.105223 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 23:50:36.105417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 23:50:36.105693 systemd[1]: motdgen.service: Deactivated successfully. May 8 23:50:36.105866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1355) May 8 23:50:36.105894 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 23:50:36.113668 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 23:50:36.113821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 23:50:36.126214 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) May 8 23:50:36.129166 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 23:50:36.130459 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 23:50:36.140763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 23:50:36.140801 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
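The kernel message just above begins an online resize of /dev/vda9 from 553472 to 1864699 blocks; at the 4 KiB ext4 block size (confirmed by the "(4k) blocks" resize2fs output below), that grows the root filesystem from about 2.1 GiB to about 7.1 GiB. The arithmetic, for reference:

    # Block counts taken from the EXT4-fs resize message; ext4 on this image
    # uses 4 KiB blocks.
    BLOCK = 4096
    for blocks in (553_472, 1_864_699):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # -> 553472 blocks = 2.11 GiB
    # -> 1864699 blocks = 7.11 GiB
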
May 8 23:50:36.144075 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 23:50:36.144100 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 23:50:36.147546 update_engine[1432]: I20250508 23:50:36.146925 1432 main.cc:92] Flatcar Update Engine starting May 8 23:50:36.150386 systemd[1]: Started update-engine.service - Update Engine. May 8 23:50:36.150693 update_engine[1432]: I20250508 23:50:36.150447 1432 update_check_scheduler.cc:74] Next update check in 6m52s May 8 23:50:36.150717 tar[1440]: linux-arm64/helm May 8 23:50:36.166710 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 23:50:36.166740 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 23:50:36.166740 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 23:50:36.166740 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 23:50:36.172976 jq[1442]: true May 8 23:50:36.159008 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 23:50:36.173142 extend-filesystems[1417]: Resized filesystem in /dev/vda9 May 8 23:50:36.164899 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 23:50:36.165181 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 23:50:36.204106 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) May 8 23:50:36.204684 systemd-logind[1428]: New seat seat0. May 8 23:50:36.205881 systemd[1]: Started systemd-logind.service - User Login Management. May 8 23:50:36.207208 bash[1471]: Updated "/home/core/.ssh/authorized_keys" May 8 23:50:36.212876 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 23:50:36.216214 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 23:50:36.250618 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 23:50:36.337109 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 23:50:36.339262 containerd[1443]: time="2025-05-08T23:50:36.338123714Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 23:50:36.356513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 23:50:36.367164 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 23:50:36.368202 containerd[1443]: time="2025-05-08T23:50:36.368128634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.369733 containerd[1443]: time="2025-05-08T23:50:36.369672794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:50:36.369733 containerd[1443]: time="2025-05-08T23:50:36.369709914Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 23:50:36.369733 containerd[1443]: time="2025-05-08T23:50:36.369727234Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 8 23:50:36.369920 containerd[1443]: time="2025-05-08T23:50:36.369901194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:50:36.369946 containerd[1443]: time="2025-05-08T23:50:36.369924474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370003 containerd[1443]: time="2025-05-08T23:50:36.369983674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:50:36.370003 containerd[1443]: time="2025-05-08T23:50:36.369999474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370373 containerd[1443]: time="2025-05-08T23:50:36.370155034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:50:36.370373 containerd[1443]: time="2025-05-08T23:50:36.370174034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370373 containerd[1443]: time="2025-05-08T23:50:36.370186794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:50:36.370373 containerd[1443]: time="2025-05-08T23:50:36.370195794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370373 containerd[1443]: time="2025-05-08T23:50:36.370261234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370524 containerd[1443]: time="2025-05-08T23:50:36.370434634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:50:36.370637 containerd[1443]: time="2025-05-08T23:50:36.370536034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:50:36.370637 containerd[1443]: time="2025-05-08T23:50:36.370555594Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:50:36.370637 containerd[1443]: time="2025-05-08T23:50:36.370628714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 23:50:36.370706 containerd[1443]: time="2025-05-08T23:50:36.370666554Z" level=info msg="metadata content store policy set" policy=shared May 8 23:50:36.372503 systemd[1]: issuegen.service: Deactivated successfully. May 8 23:50:36.372727 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 23:50:36.374419 containerd[1443]: time="2025-05-08T23:50:36.374357074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:50:36.374419 containerd[1443]: time="2025-05-08T23:50:36.374411754Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 8 23:50:36.374504 containerd[1443]: time="2025-05-08T23:50:36.374430794Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:50:36.374504 containerd[1443]: time="2025-05-08T23:50:36.374452354Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 23:50:36.374504 containerd[1443]: time="2025-05-08T23:50:36.374468554Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:50:36.374681 containerd[1443]: time="2025-05-08T23:50:36.374631474Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:50:36.375063 containerd[1443]: time="2025-05-08T23:50:36.375022794Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 23:50:36.375281 containerd[1443]: time="2025-05-08T23:50:36.375259634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:50:36.375347 containerd[1443]: time="2025-05-08T23:50:36.375334274Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:50:36.375413 containerd[1443]: time="2025-05-08T23:50:36.375399194Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:50:36.375467 containerd[1443]: time="2025-05-08T23:50:36.375454714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:50:36.375531 containerd[1443]: time="2025-05-08T23:50:36.375517914Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:50:36.375581 containerd[1443]: time="2025-05-08T23:50:36.375569754Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:50:36.375640 containerd[1443]: time="2025-05-08T23:50:36.375628474Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 23:50:36.375707 containerd[1443]: time="2025-05-08T23:50:36.375684634Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 23:50:36.375726 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 23:50:36.375960 containerd[1443]: time="2025-05-08T23:50:36.375938754Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:50:36.376027 containerd[1443]: time="2025-05-08T23:50:36.376014354Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 23:50:36.376148 containerd[1443]: time="2025-05-08T23:50:36.376132354Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 23:50:36.376225 containerd[1443]: time="2025-05-08T23:50:36.376211074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376277 containerd[1443]: time="2025-05-08T23:50:36.376265754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 8 23:50:36.376348 containerd[1443]: time="2025-05-08T23:50:36.376334154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376454 containerd[1443]: time="2025-05-08T23:50:36.376438874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376523 containerd[1443]: time="2025-05-08T23:50:36.376509634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376577 containerd[1443]: time="2025-05-08T23:50:36.376564234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376636 containerd[1443]: time="2025-05-08T23:50:36.376623474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376688 containerd[1443]: time="2025-05-08T23:50:36.376674714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376738 containerd[1443]: time="2025-05-08T23:50:36.376726154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376792 containerd[1443]: time="2025-05-08T23:50:36.376781034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 23:50:36.376876 containerd[1443]: time="2025-05-08T23:50:36.376853194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:50:36.377038 containerd[1443]: time="2025-05-08T23:50:36.376918554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:50:36.377114 containerd[1443]: time="2025-05-08T23:50:36.377089274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:50:36.377295 containerd[1443]: time="2025-05-08T23:50:36.377276994Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:50:36.377369 containerd[1443]: time="2025-05-08T23:50:36.377356354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:50:36.377424 containerd[1443]: time="2025-05-08T23:50:36.377412594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 23:50:36.377473 containerd[1443]: time="2025-05-08T23:50:36.377462594Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:50:36.377719 containerd[1443]: time="2025-05-08T23:50:36.377704994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 23:50:36.377927 containerd[1443]: time="2025-05-08T23:50:36.377906874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:50:36.378008 containerd[1443]: time="2025-05-08T23:50:36.377993514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 23:50:36.378061 containerd[1443]: time="2025-05-08T23:50:36.378048194Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:50:36.378108 containerd[1443]: time="2025-05-08T23:50:36.378093914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:50:36.378159 containerd[1443]: time="2025-05-08T23:50:36.378147474Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:50:36.378209 containerd[1443]: time="2025-05-08T23:50:36.378198194Z" level=info msg="NRI interface is disabled by configuration." May 8 23:50:36.378256 containerd[1443]: time="2025-05-08T23:50:36.378245114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 23:50:36.378675 containerd[1443]: time="2025-05-08T23:50:36.378620914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:50:36.378875 containerd[1443]: time="2025-05-08T23:50:36.378856274Z" level=info msg="Connect containerd service" May 8 23:50:36.378964 containerd[1443]: time="2025-05-08T23:50:36.378950514Z" level=info msg="using legacy CRI 
server" May 8 23:50:36.379013 containerd[1443]: time="2025-05-08T23:50:36.378999914Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:50:36.379285 containerd[1443]: time="2025-05-08T23:50:36.379270074Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:50:36.381055 containerd[1443]: time="2025-05-08T23:50:36.381025554Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381327074Z" level=info msg="Start subscribing containerd event" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381383674Z" level=info msg="Start recovering state" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381450674Z" level=info msg="Start event monitor" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381461354Z" level=info msg="Start snapshots syncer" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381470434Z" level=info msg="Start cni network conf syncer for default" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381476954Z" level=info msg="Start streaming server" May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381686074Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381729794Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:50:36.381795 containerd[1443]: time="2025-05-08T23:50:36.381775114Z" level=info msg="containerd successfully booted in 0.044576s" May 8 23:50:36.381859 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:50:36.391173 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 23:50:36.394549 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:50:36.397371 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 8 23:50:36.398857 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:50:36.524577 tar[1440]: linux-arm64/LICENSE May 8 23:50:36.524772 tar[1440]: linux-arm64/README.md May 8 23:50:36.537225 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 23:50:37.390993 systemd-networkd[1376]: eth0: Gained IPv6LL May 8 23:50:37.393585 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 23:50:37.395351 systemd[1]: Reached target network-online.target - Network is Online. May 8 23:50:37.413078 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 23:50:37.415476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:50:37.417599 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 23:50:37.433025 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 23:50:37.433327 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 23:50:37.435151 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 23:50:37.439825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 23:50:37.892830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 23:50:37.894384 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 23:50:37.896817 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:50:37.898924 systemd[1]: Startup finished in 564ms (kernel) + 11.182s (initrd) + 3.513s (userspace) = 15.260s. May 8 23:50:38.376370 kubelet[1528]: E0508 23:50:38.376273 1528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:50:38.378961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:50:38.379108 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:50:45.996097 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 23:50:45.997247 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794). May 8 23:50:46.069818 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.071473 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.083507 systemd-logind[1428]: New session 1 of user core. May 8 23:50:46.084520 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 23:50:46.097042 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 23:50:46.107505 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 23:50:46.109699 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 23:50:46.116088 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 23:50:46.187071 systemd[1546]: Queued start job for default target default.target. May 8 23:50:46.202821 systemd[1546]: Created slice app.slice - User Application Slice. May 8 23:50:46.202873 systemd[1546]: Reached target paths.target - Paths. May 8 23:50:46.202886 systemd[1546]: Reached target timers.target - Timers. May 8 23:50:46.204145 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 23:50:46.213551 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 23:50:46.213613 systemd[1546]: Reached target sockets.target - Sockets. May 8 23:50:46.213625 systemd[1546]: Reached target basic.target - Basic System. May 8 23:50:46.213661 systemd[1546]: Reached target default.target - Main User Target. May 8 23:50:46.213688 systemd[1546]: Startup finished in 92ms. May 8 23:50:46.213964 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 23:50:46.215421 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 23:50:46.273930 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:44802.service - OpenSSH per-connection server daemon (10.0.0.1:44802). May 8 23:50:46.318636 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 44802 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.319805 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.324442 systemd-logind[1428]: New session 2 of user core. 
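Note: the kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is generated by kubeadm init or kubeadm join, so this failure (and the restart loop that follows below) is normal until one of those runs. A minimal sketch of what a hand-written stand-in could contain, assuming the systemd cgroup driver that the containerd runc options above advertise; values are illustrative:

    # hypothetical stand-in for the file kubeadm would generate
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF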
May 8 23:50:46.329980 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 23:50:46.380790 sshd[1559]: Connection closed by 10.0.0.1 port 44802 May 8 23:50:46.381279 sshd-session[1557]: pam_unix(sshd:session): session closed for user core May 8 23:50:46.392177 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:44802.service: Deactivated successfully. May 8 23:50:46.393445 systemd[1]: session-2.scope: Deactivated successfully. May 8 23:50:46.395982 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. May 8 23:50:46.397094 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:44804.service - OpenSSH per-connection server daemon (10.0.0.1:44804). May 8 23:50:46.398213 systemd-logind[1428]: Removed session 2. May 8 23:50:46.441036 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 44804 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.442172 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.446094 systemd-logind[1428]: New session 3 of user core. May 8 23:50:46.455976 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 23:50:46.502686 sshd[1566]: Connection closed by 10.0.0.1 port 44804 May 8 23:50:46.503033 sshd-session[1564]: pam_unix(sshd:session): session closed for user core May 8 23:50:46.521309 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:44804.service: Deactivated successfully. May 8 23:50:46.522680 systemd[1]: session-3.scope: Deactivated successfully. May 8 23:50:46.523868 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. May 8 23:50:46.524981 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:44806.service - OpenSSH per-connection server daemon (10.0.0.1:44806). May 8 23:50:46.525821 systemd-logind[1428]: Removed session 3. May 8 23:50:46.569479 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 44806 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.570642 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.574616 systemd-logind[1428]: New session 4 of user core. May 8 23:50:46.585024 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 23:50:46.636612 sshd[1573]: Connection closed by 10.0.0.1 port 44806 May 8 23:50:46.636996 sshd-session[1571]: pam_unix(sshd:session): session closed for user core May 8 23:50:46.648102 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:44806.service: Deactivated successfully. May 8 23:50:46.650107 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:50:46.651345 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. May 8 23:50:46.654464 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:44812.service - OpenSSH per-connection server daemon (10.0.0.1:44812). May 8 23:50:46.655301 systemd-logind[1428]: Removed session 4. May 8 23:50:46.698333 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 44812 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.699495 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.703877 systemd-logind[1428]: New session 5 of user core. May 8 23:50:46.713480 systemd[1]: Started session-5.scope - Session 5 of User core. 
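Note: each SSH login above runs in its own templated unit, sshd@<n>-<local addr>:22-<peer addr>:<port>.service, which is why every connection produces its own Started/Deactivated pair instead of messages from one long-lived daemon. While a session is open, the live instances can be listed; a sketch:

    systemctl list-units 'sshd@*' --no-legend    # one unit per active connection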
May 8 23:50:46.772467 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:50:46.772732 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:50:46.787623 sudo[1581]: pam_unix(sudo:session): session closed for user root May 8 23:50:46.788879 sshd[1580]: Connection closed by 10.0.0.1 port 44812 May 8 23:50:46.789389 sshd-session[1578]: pam_unix(sshd:session): session closed for user core May 8 23:50:46.801255 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:44812.service: Deactivated successfully. May 8 23:50:46.802650 systemd[1]: session-5.scope: Deactivated successfully. May 8 23:50:46.804917 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. May 8 23:50:46.805281 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:44818.service - OpenSSH per-connection server daemon (10.0.0.1:44818). May 8 23:50:46.806427 systemd-logind[1428]: Removed session 5. May 8 23:50:46.849699 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 44818 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:46.850993 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:46.854905 systemd-logind[1428]: New session 6 of user core. May 8 23:50:46.863989 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:50:46.916043 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:50:46.916327 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:50:46.919352 sudo[1590]: pam_unix(sudo:session): session closed for user root May 8 23:50:46.923757 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 23:50:46.924289 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:50:46.948179 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:50:46.971136 augenrules[1612]: No rules May 8 23:50:46.972343 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:50:46.972560 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:50:46.974711 sudo[1589]: pam_unix(sudo:session): session closed for user root May 8 23:50:46.975890 sshd[1588]: Connection closed by 10.0.0.1 port 44818 May 8 23:50:46.976169 sshd-session[1586]: pam_unix(sshd:session): session closed for user core May 8 23:50:46.986222 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:44818.service: Deactivated successfully. May 8 23:50:46.987693 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:50:46.988961 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. May 8 23:50:46.990137 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:44830.service - OpenSSH per-connection server daemon (10.0.0.1:44830). May 8 23:50:46.990854 systemd-logind[1428]: Removed session 6. May 8 23:50:47.035081 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 44830 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:50:47.036294 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:50:47.040445 systemd-logind[1428]: New session 7 of user core. May 8 23:50:47.050015 systemd[1]: Started session-7.scope - Session 7 of User core. 
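Note: the pair of sudo entries above shows the SELinux/default audit rule files being removed and the rule loader restarted, after which augenrules correctly reports "No rules". The same sequence by hand, taken directly from the logged commands:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules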
May 8 23:50:47.100682 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:50:47.101311 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:50:47.434206 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 23:50:47.434370 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 23:50:47.688161 dockerd[1643]: time="2025-05-08T23:50:47.688035714Z" level=info msg="Starting up" May 8 23:50:47.781132 dockerd[1643]: time="2025-05-08T23:50:47.781062834Z" level=info msg="Loading containers: start." May 8 23:50:47.921861 kernel: Initializing XFRM netlink socket May 8 23:50:47.985581 systemd-networkd[1376]: docker0: Link UP May 8 23:50:48.016046 dockerd[1643]: time="2025-05-08T23:50:48.015983874Z" level=info msg="Loading containers: done." May 8 23:50:48.026486 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3488637370-merged.mount: Deactivated successfully. May 8 23:50:48.028233 dockerd[1643]: time="2025-05-08T23:50:48.027832674Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 23:50:48.028233 dockerd[1643]: time="2025-05-08T23:50:48.027952394Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 8 23:50:48.028233 dockerd[1643]: time="2025-05-08T23:50:48.028056034Z" level=info msg="Daemon has completed initialization" May 8 23:50:48.054833 dockerd[1643]: time="2025-05-08T23:50:48.054784354Z" level=info msg="API listen on /run/docker.sock" May 8 23:50:48.054962 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 23:50:48.629350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 23:50:48.639046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:50:48.736818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:50:48.740088 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:50:48.775195 kubelet[1847]: E0508 23:50:48.775144 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:50:48.778290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:50:48.778428 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:50:48.902411 containerd[1443]: time="2025-05-08T23:50:48.902205834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 23:50:49.702747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571362768.mount: Deactivated successfully. 
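Note: once dockerd logs "API listen on /run/docker.sock" (above), the daemon can be probed directly over its Unix socket without the docker CLI; the Engine API's /version endpoint returns the same version string seen in the log (27.2.1). A sketch:

    curl --unix-socket /run/docker.sock http://localhost/version    # daemon version info as JSON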
May 8 23:50:51.136151 containerd[1443]: time="2025-05-08T23:50:51.136090754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:51.136560 containerd[1443]: time="2025-05-08T23:50:51.136461434Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 8 23:50:51.137322 containerd[1443]: time="2025-05-08T23:50:51.137293514Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:51.140245 containerd[1443]: time="2025-05-08T23:50:51.140212474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:51.141507 containerd[1443]: time="2025-05-08T23:50:51.141468554Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.23922008s" May 8 23:50:51.141546 containerd[1443]: time="2025-05-08T23:50:51.141507034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 8 23:50:51.142245 containerd[1443]: time="2025-05-08T23:50:51.142208794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 23:50:52.690846 containerd[1443]: time="2025-05-08T23:50:52.690785434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:52.691403 containerd[1443]: time="2025-05-08T23:50:52.691359114Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 8 23:50:52.692071 containerd[1443]: time="2025-05-08T23:50:52.692047154Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:52.695694 containerd[1443]: time="2025-05-08T23:50:52.695651954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:52.696741 containerd[1443]: time="2025-05-08T23:50:52.696701394Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.55445712s" May 8 23:50:52.696777 containerd[1443]: time="2025-05-08T23:50:52.696745074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 8 23:50:52.697573 containerd[1443]: 
time="2025-05-08T23:50:52.697531554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 23:50:54.076697 containerd[1443]: time="2025-05-08T23:50:54.076644874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:54.077633 containerd[1443]: time="2025-05-08T23:50:54.077436394Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 8 23:50:54.078297 containerd[1443]: time="2025-05-08T23:50:54.078272234Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:54.081435 containerd[1443]: time="2025-05-08T23:50:54.081401874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:54.082584 containerd[1443]: time="2025-05-08T23:50:54.082539714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.38497412s" May 8 23:50:54.082584 containerd[1443]: time="2025-05-08T23:50:54.082573354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 8 23:50:54.083032 containerd[1443]: time="2025-05-08T23:50:54.083002194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 23:50:55.392162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904085410.mount: Deactivated successfully. 
May 8 23:50:55.741010 containerd[1443]: time="2025-05-08T23:50:55.740893234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:55.741696 containerd[1443]: time="2025-05-08T23:50:55.741461154Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 8 23:50:55.742919 containerd[1443]: time="2025-05-08T23:50:55.742875714Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:55.745412 containerd[1443]: time="2025-05-08T23:50:55.745359434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:55.746087 containerd[1443]: time="2025-05-08T23:50:55.746052794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.66301868s" May 8 23:50:55.746147 containerd[1443]: time="2025-05-08T23:50:55.746086794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 8 23:50:55.746730 containerd[1443]: time="2025-05-08T23:50:55.746552314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 23:50:56.355605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198197665.mount: Deactivated successfully. 
May 8 23:50:57.281123 containerd[1443]: time="2025-05-08T23:50:57.281070994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.282879 containerd[1443]: time="2025-05-08T23:50:57.282782474Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 8 23:50:57.285409 containerd[1443]: time="2025-05-08T23:50:57.283874434Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.286530 containerd[1443]: time="2025-05-08T23:50:57.286495634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.288139 containerd[1443]: time="2025-05-08T23:50:57.288096314Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.54151524s" May 8 23:50:57.288139 containerd[1443]: time="2025-05-08T23:50:57.288138514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 23:50:57.288712 containerd[1443]: time="2025-05-08T23:50:57.288556954Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 23:50:57.759907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715253655.mount: Deactivated successfully. 
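Note: by this point the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, and coredns images have all been pulled through containerd's CRI, with pause:3.10 just starting. The cached set can be inspected with crictl, assuming it is pointed at the containerd socket named in the config dump earlier; a sketch:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images    # lists the registry.k8s.io/* images pulled above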
May 8 23:50:57.765297 containerd[1443]: time="2025-05-08T23:50:57.765251114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.766016 containerd[1443]: time="2025-05-08T23:50:57.765915954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 8 23:50:57.766788 containerd[1443]: time="2025-05-08T23:50:57.766744594Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.768962 containerd[1443]: time="2025-05-08T23:50:57.768922234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:50:57.773171 containerd[1443]: time="2025-05-08T23:50:57.773130954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.5422ms" May 8 23:50:57.773386 containerd[1443]: time="2025-05-08T23:50:57.773285514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 23:50:57.773781 containerd[1443]: time="2025-05-08T23:50:57.773743314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 23:50:58.339796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793315643.mount: Deactivated successfully. May 8 23:50:59.028750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 23:50:59.039033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:50:59.129109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:50:59.132607 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:50:59.165426 kubelet[2032]: E0508 23:50:59.165371 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:50:59.167670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:50:59.167819 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
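Note: this is the third identical kubelet failure; the roughly 10 s cadence between attempts (23:50:38, 23:50:48, 23:50:59) suggests a kubeadm-style unit with Restart=always and RestartSec=10, though the actual unit settings are an assumption here and can be confirmed on the host:

    systemctl show kubelet.service -p Restart -p RestartUSec    # expected (assumed): Restart=always, RestartUSec=10s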
May 8 23:51:00.947074 containerd[1443]: time="2025-05-08T23:51:00.947027874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:00.947509 containerd[1443]: time="2025-05-08T23:51:00.947462714Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 8 23:51:00.948314 containerd[1443]: time="2025-05-08T23:51:00.948248914Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:00.951581 containerd[1443]: time="2025-05-08T23:51:00.951548714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:00.953085 containerd[1443]: time="2025-05-08T23:51:00.953048994Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.17926976s" May 8 23:51:00.953085 containerd[1443]: time="2025-05-08T23:51:00.953082514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 8 23:51:07.952516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:07.964051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:07.982714 systemd[1]: Reloading requested from client PID 2075 ('systemctl') (unit session-7.scope)... May 8 23:51:07.982731 systemd[1]: Reloading... May 8 23:51:08.044879 zram_generator::config[2114]: No configuration found. May 8 23:51:08.153221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:51:08.205128 systemd[1]: Reloading finished in 222 ms. May 8 23:51:08.246393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:08.247852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:08.249888 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:51:08.250060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:08.251424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:08.345857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:08.350263 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:51:08.386334 kubelet[2161]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:08.386334 kubelet[2161]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
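Note: these deprecation warnings (this one and the similar ones just above and below) ask for the flags to move into the kubelet's config file. For --container-runtime-endpoint the corresponding KubeletConfiguration field is containerRuntimeEndpoint; the analogous fields for the other flags are not shown here. A sketch, appending to the config file discussed earlier; the endpoint value matches the containerd socket from this log:

    # hypothetical fragment replacing the --container-runtime-endpoint flag
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF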
May 8 23:51:08.386334 kubelet[2161]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:08.386683 kubelet[2161]: I0508 23:51:08.386470 2161 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:51:09.603461 kubelet[2161]: I0508 23:51:09.602890 2161 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 23:51:09.603461 kubelet[2161]: I0508 23:51:09.602922 2161 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:51:09.603461 kubelet[2161]: I0508 23:51:09.603330 2161 server.go:929] "Client rotation is on, will bootstrap in background" May 8 23:51:09.646027 kubelet[2161]: I0508 23:51:09.645985 2161 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:51:09.647211 kubelet[2161]: E0508 23:51:09.647177 2161 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:09.652238 kubelet[2161]: E0508 23:51:09.652209 2161 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 23:51:09.652238 kubelet[2161]: I0508 23:51:09.652237 2161 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 23:51:09.655471 kubelet[2161]: I0508 23:51:09.655447 2161 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:51:09.656273 kubelet[2161]: I0508 23:51:09.656247 2161 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 23:51:09.656417 kubelet[2161]: I0508 23:51:09.656380 2161 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:51:09.656573 kubelet[2161]: I0508 23:51:09.656410 2161 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 23:51:09.656709 kubelet[2161]: I0508 23:51:09.656697 2161 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:51:09.656709 kubelet[2161]: I0508 23:51:09.656709 2161 container_manager_linux.go:300] "Creating device plugin manager" May 8 23:51:09.656919 kubelet[2161]: I0508 23:51:09.656906 2161 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:09.661018 kubelet[2161]: I0508 23:51:09.660536 2161 kubelet.go:408] "Attempting to sync node with API server" May 8 23:51:09.661018 kubelet[2161]: I0508 23:51:09.660566 2161 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:51:09.661018 kubelet[2161]: I0508 23:51:09.660649 2161 kubelet.go:314] "Adding apiserver pod source" May 8 23:51:09.661018 kubelet[2161]: I0508 23:51:09.660660 2161 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:51:09.662396 kubelet[2161]: I0508 23:51:09.662345 2161 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:51:09.664220 kubelet[2161]: W0508 23:51:09.664162 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:09.664293 kubelet[2161]: E0508 23:51:09.664229 2161 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:09.664510 kubelet[2161]: W0508 23:51:09.664356 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:09.664549 kubelet[2161]: E0508 23:51:09.664521 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:09.665027 kubelet[2161]: I0508 23:51:09.664970 2161 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:51:09.665670 kubelet[2161]: W0508 23:51:09.665648 2161 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 23:51:09.667105 kubelet[2161]: I0508 23:51:09.666952 2161 server.go:1269] "Started kubelet" May 8 23:51:09.667169 kubelet[2161]: I0508 23:51:09.667090 2161 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:51:09.667743 kubelet[2161]: I0508 23:51:09.667462 2161 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:51:09.667743 kubelet[2161]: I0508 23:51:09.667710 2161 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:51:09.669220 kubelet[2161]: I0508 23:51:09.668679 2161 server.go:460] "Adding debug handlers to kubelet server" May 8 23:51:09.669736 kubelet[2161]: I0508 23:51:09.669535 2161 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:51:09.670354 kubelet[2161]: I0508 23:51:09.670336 2161 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 23:51:09.674160 kubelet[2161]: E0508 23:51:09.671795 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:09.674160 kubelet[2161]: I0508 23:51:09.671910 2161 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 23:51:09.674160 kubelet[2161]: I0508 23:51:09.672084 2161 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 23:51:09.674160 kubelet[2161]: I0508 23:51:09.672149 2161 reconciler.go:26] "Reconciler: start to sync state" May 8 23:51:09.674160 kubelet[2161]: W0508 23:51:09.672444 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:09.674160 kubelet[2161]: E0508 23:51:09.672485 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:09.674344 kubelet[2161]: I0508 23:51:09.674326 2161 factory.go:221] Registration of the systemd container factory successfully May 8 23:51:09.674452 kubelet[2161]: I0508 23:51:09.674427 2161 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:51:09.674625 kubelet[2161]: E0508 23:51:09.674592 2161 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" May 8 23:51:09.674683 kubelet[2161]: E0508 23:51:09.670492 2161 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db24bed8685fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 23:51:09.666928122 +0000 UTC m=+1.313295889,LastTimestamp:2025-05-08 23:51:09.666928122 +0000 UTC m=+1.313295889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 23:51:09.676039 kubelet[2161]: E0508 23:51:09.676012 2161 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:51:09.676579 kubelet[2161]: I0508 23:51:09.676542 2161 factory.go:221] Registration of the containerd container factory successfully May 8 23:51:09.686420 kubelet[2161]: I0508 23:51:09.686336 2161 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:51:09.687332 kubelet[2161]: I0508 23:51:09.687308 2161 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 23:51:09.687332 kubelet[2161]: I0508 23:51:09.687330 2161 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:51:09.687430 kubelet[2161]: I0508 23:51:09.687345 2161 kubelet.go:2321] "Starting kubelet main sync loop" May 8 23:51:09.687430 kubelet[2161]: E0508 23:51:09.687391 2161 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:51:09.691329 kubelet[2161]: I0508 23:51:09.691297 2161 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:51:09.691329 kubelet[2161]: I0508 23:51:09.691317 2161 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:51:09.691329 kubelet[2161]: I0508 23:51:09.691334 2161 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:09.692068 kubelet[2161]: W0508 23:51:09.691975 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:09.692068 kubelet[2161]: E0508 23:51:09.692053 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:09.754734 kubelet[2161]: I0508 23:51:09.754662 2161 policy_none.go:49] "None policy: Start" May 8 23:51:09.755660 kubelet[2161]: I0508 23:51:09.755635 2161 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:51:09.755660 kubelet[2161]: I0508 23:51:09.755668 2161 state_mem.go:35] "Initializing new in-memory state store" May 8 23:51:09.762132 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 23:51:09.772644 kubelet[2161]: E0508 23:51:09.772605 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:09.779823 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 23:51:09.782677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 8 23:51:09.788390 kubelet[2161]: E0508 23:51:09.788354 2161 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 23:51:09.794588 kubelet[2161]: I0508 23:51:09.794542 2161 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:51:09.795095 kubelet[2161]: I0508 23:51:09.794748 2161 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 23:51:09.795095 kubelet[2161]: I0508 23:51:09.794765 2161 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:51:09.795095 kubelet[2161]: I0508 23:51:09.795023 2161 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:51:09.796479 kubelet[2161]: E0508 23:51:09.796448 2161 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 23:51:09.875708 kubelet[2161]: E0508 23:51:09.875579 2161 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" May 8 23:51:09.896660 kubelet[2161]: I0508 23:51:09.896631 2161 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:09.897212 kubelet[2161]: E0508 23:51:09.897185 2161 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 8 23:51:10.001995 systemd[1]: Created slice kubepods-burstable-podd9db2f6597d14ab2700792c4436b3593.slice - libcontainer container kubepods-burstable-podd9db2f6597d14ab2700792c4436b3593.slice. May 8 23:51:10.029117 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 8 23:51:10.044516 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
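Note: the three per-pod slices above carry the pod UIDs (d9db2f…, d4a6b7…, 061355…) of the static pods the kubelet found under its staticPodPath, /etc/kubernetes/manifests ("Adding static pod path" earlier); the same UIDs reappear in the volume lines that follow. A sketch of checking the source manifests; the file names are the usual kubeadm ones and are an assumption here:

    ls /etc/kubernetes/manifests
    # expected (assumed) names: kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml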
May 8 23:51:10.074193 kubelet[2161]: I0508 23:51:10.074123 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:10.074193 kubelet[2161]: I0508 23:51:10.074162 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:10.074193 kubelet[2161]: I0508 23:51:10.074195 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:10.074418 kubelet[2161]: I0508 23:51:10.074213 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:10.074418 kubelet[2161]: I0508 23:51:10.074228 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:10.074418 kubelet[2161]: I0508 23:51:10.074243 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:10.074418 kubelet[2161]: I0508 23:51:10.074259 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:10.074418 kubelet[2161]: I0508 23:51:10.074274 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:10.074542 kubelet[2161]: I0508 23:51:10.074288 2161 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " 
pod="kube-system/kube-scheduler-localhost" May 8 23:51:10.099334 kubelet[2161]: I0508 23:51:10.099303 2161 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:10.099684 kubelet[2161]: E0508 23:51:10.099647 2161 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 8 23:51:10.276280 kubelet[2161]: E0508 23:51:10.276130 2161 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" May 8 23:51:10.326619 kubelet[2161]: E0508 23:51:10.326548 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:10.327323 containerd[1443]: time="2025-05-08T23:51:10.327272925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9db2f6597d14ab2700792c4436b3593,Namespace:kube-system,Attempt:0,}" May 8 23:51:10.331474 kubelet[2161]: E0508 23:51:10.331441 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:10.331903 containerd[1443]: time="2025-05-08T23:51:10.331867752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 23:51:10.347605 kubelet[2161]: E0508 23:51:10.347328 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:10.347887 containerd[1443]: time="2025-05-08T23:51:10.347806087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 23:51:10.500732 kubelet[2161]: I0508 23:51:10.500669 2161 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:10.501004 kubelet[2161]: E0508 23:51:10.500978 2161 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 8 23:51:10.509482 kubelet[2161]: W0508 23:51:10.509431 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:10.509547 kubelet[2161]: E0508 23:51:10.509494 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:10.898595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116659893.mount: Deactivated successfully. 
May 8 23:51:10.902997 containerd[1443]: time="2025-05-08T23:51:10.902947255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:10.904205 containerd[1443]: time="2025-05-08T23:51:10.904138222Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:10.905711 containerd[1443]: time="2025-05-08T23:51:10.905672711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 8 23:51:10.906531 containerd[1443]: time="2025-05-08T23:51:10.906396955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:51:10.908099 containerd[1443]: time="2025-05-08T23:51:10.908050005Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:10.910020 containerd[1443]: time="2025-05-08T23:51:10.909964456Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:10.910202 containerd[1443]: time="2025-05-08T23:51:10.910163058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:51:10.912014 containerd[1443]: time="2025-05-08T23:51:10.911976948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:51:10.913104 containerd[1443]: time="2025-05-08T23:51:10.913072475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 581.133122ms" May 8 23:51:10.914588 containerd[1443]: time="2025-05-08T23:51:10.914380483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.025517ms" May 8 23:51:10.917248 containerd[1443]: time="2025-05-08T23:51:10.917221299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.317132ms" May 8 23:51:10.947700 kubelet[2161]: W0508 23:51:10.947643 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:10.952938 kubelet[2161]: E0508 
23:51:10.952618 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:11.032769 kubelet[2161]: W0508 23:51:11.032660 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:11.032769 kubelet[2161]: E0508 23:51:11.032737 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:11.069557 containerd[1443]: time="2025-05-08T23:51:11.069438936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:11.069557 containerd[1443]: time="2025-05-08T23:51:11.069514896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:11.069557 containerd[1443]: time="2025-05-08T23:51:11.069531256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.070076 containerd[1443]: time="2025-05-08T23:51:11.070025979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.070785 containerd[1443]: time="2025-05-08T23:51:11.070607822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:11.070785 containerd[1443]: time="2025-05-08T23:51:11.070668902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:11.070785 containerd[1443]: time="2025-05-08T23:51:11.070688423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.070785 containerd[1443]: time="2025-05-08T23:51:11.070759783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.071044 containerd[1443]: time="2025-05-08T23:51:11.070965424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:11.071105 containerd[1443]: time="2025-05-08T23:51:11.071066465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:11.071157 containerd[1443]: time="2025-05-08T23:51:11.071127905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.071285 containerd[1443]: time="2025-05-08T23:51:11.071237626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:11.077257 kubelet[2161]: E0508 23:51:11.077215 2161 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" May 8 23:51:11.096114 systemd[1]: Started cri-containerd-39c2be0e40208fb314c74c8fdc2de34ae32aabdd34ebca0996b33e65a959bdfd.scope - libcontainer container 39c2be0e40208fb314c74c8fdc2de34ae32aabdd34ebca0996b33e65a959bdfd. May 8 23:51:11.097329 systemd[1]: Started cri-containerd-e08054d9397b218dc8620b519b6525f3d95421599ecf43c7802111d453ab8fe2.scope - libcontainer container e08054d9397b218dc8620b519b6525f3d95421599ecf43c7802111d453ab8fe2. May 8 23:51:11.100652 systemd[1]: Started cri-containerd-298dcfe244373394354ee83a0353c0ec8b4030f91b7421bab00c2f3b82f1d677.scope - libcontainer container 298dcfe244373394354ee83a0353c0ec8b4030f91b7421bab00c2f3b82f1d677. May 8 23:51:11.102491 kubelet[2161]: W0508 23:51:11.102386 2161 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 8 23:51:11.102491 kubelet[2161]: E0508 23:51:11.102464 2161 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 8 23:51:11.134327 containerd[1443]: time="2025-05-08T23:51:11.134262776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9db2f6597d14ab2700792c4436b3593,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c2be0e40208fb314c74c8fdc2de34ae32aabdd34ebca0996b33e65a959bdfd\"" May 8 23:51:11.135410 containerd[1443]: time="2025-05-08T23:51:11.135371902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e08054d9397b218dc8620b519b6525f3d95421599ecf43c7802111d453ab8fe2\"" May 8 23:51:11.136649 kubelet[2161]: E0508 23:51:11.136556 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:11.137380 kubelet[2161]: E0508 23:51:11.136807 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:11.138400 containerd[1443]: time="2025-05-08T23:51:11.138359358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"298dcfe244373394354ee83a0353c0ec8b4030f91b7421bab00c2f3b82f1d677\"" May 8 23:51:11.139528 kubelet[2161]: E0508 23:51:11.139503 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:11.140053 containerd[1443]: time="2025-05-08T23:51:11.140001247Z" level=info msg="CreateContainer within 
sandbox \"39c2be0e40208fb314c74c8fdc2de34ae32aabdd34ebca0996b33e65a959bdfd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 23:51:11.140135 containerd[1443]: time="2025-05-08T23:51:11.140017248Z" level=info msg="CreateContainer within sandbox \"e08054d9397b218dc8620b519b6525f3d95421599ecf43c7802111d453ab8fe2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 23:51:11.141510 containerd[1443]: time="2025-05-08T23:51:11.141476576Z" level=info msg="CreateContainer within sandbox \"298dcfe244373394354ee83a0353c0ec8b4030f91b7421bab00c2f3b82f1d677\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 23:51:11.160633 containerd[1443]: time="2025-05-08T23:51:11.160513801Z" level=info msg="CreateContainer within sandbox \"39c2be0e40208fb314c74c8fdc2de34ae32aabdd34ebca0996b33e65a959bdfd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d15f423d24c555d7a2b49f6225bf513eb997d4173f46173be6edb252bb49bdc\"" May 8 23:51:11.161428 containerd[1443]: time="2025-05-08T23:51:11.161351166Z" level=info msg="StartContainer for \"8d15f423d24c555d7a2b49f6225bf513eb997d4173f46173be6edb252bb49bdc\"" May 8 23:51:11.163629 containerd[1443]: time="2025-05-08T23:51:11.163490098Z" level=info msg="CreateContainer within sandbox \"298dcfe244373394354ee83a0353c0ec8b4030f91b7421bab00c2f3b82f1d677\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9bdcaa0abdf65d7e86feecbc82cd6888f367022f53ddbf58327ce1f64174d944\"" May 8 23:51:11.164025 containerd[1443]: time="2025-05-08T23:51:11.163983221Z" level=info msg="StartContainer for \"9bdcaa0abdf65d7e86feecbc82cd6888f367022f53ddbf58327ce1f64174d944\"" May 8 23:51:11.169402 containerd[1443]: time="2025-05-08T23:51:11.169355650Z" level=info msg="CreateContainer within sandbox \"e08054d9397b218dc8620b519b6525f3d95421599ecf43c7802111d453ab8fe2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62bc1c9872cf601f208548db0cf195b47f2c8d5a59b90858ccfb4392cbff7dac\"" May 8 23:51:11.170125 containerd[1443]: time="2025-05-08T23:51:11.170045854Z" level=info msg="StartContainer for \"62bc1c9872cf601f208548db0cf195b47f2c8d5a59b90858ccfb4392cbff7dac\"" May 8 23:51:11.190017 systemd[1]: Started cri-containerd-8d15f423d24c555d7a2b49f6225bf513eb997d4173f46173be6edb252bb49bdc.scope - libcontainer container 8d15f423d24c555d7a2b49f6225bf513eb997d4173f46173be6edb252bb49bdc. May 8 23:51:11.194153 systemd[1]: Started cri-containerd-62bc1c9872cf601f208548db0cf195b47f2c8d5a59b90858ccfb4392cbff7dac.scope - libcontainer container 62bc1c9872cf601f208548db0cf195b47f2c8d5a59b90858ccfb4392cbff7dac. May 8 23:51:11.195702 systemd[1]: Started cri-containerd-9bdcaa0abdf65d7e86feecbc82cd6888f367022f53ddbf58327ce1f64174d944.scope - libcontainer container 9bdcaa0abdf65d7e86feecbc82cd6888f367022f53ddbf58327ce1f64174d944. 
May 8 23:51:11.234644 containerd[1443]: time="2025-05-08T23:51:11.234590053Z" level=info msg="StartContainer for \"62bc1c9872cf601f208548db0cf195b47f2c8d5a59b90858ccfb4392cbff7dac\" returns successfully" May 8 23:51:11.234644 containerd[1443]: time="2025-05-08T23:51:11.234676413Z" level=info msg="StartContainer for \"9bdcaa0abdf65d7e86feecbc82cd6888f367022f53ddbf58327ce1f64174d944\" returns successfully" May 8 23:51:11.234644 containerd[1443]: time="2025-05-08T23:51:11.234604613Z" level=info msg="StartContainer for \"8d15f423d24c555d7a2b49f6225bf513eb997d4173f46173be6edb252bb49bdc\" returns successfully" May 8 23:51:11.312782 kubelet[2161]: I0508 23:51:11.306999 2161 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:11.312782 kubelet[2161]: E0508 23:51:11.307289 2161 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 8 23:51:11.698405 kubelet[2161]: E0508 23:51:11.698364 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:11.702501 kubelet[2161]: E0508 23:51:11.702477 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:11.704251 kubelet[2161]: E0508 23:51:11.704225 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:12.704857 kubelet[2161]: E0508 23:51:12.704815 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:12.705202 kubelet[2161]: E0508 23:51:12.705165 2161 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:12.830554 kubelet[2161]: E0508 23:51:12.830503 2161 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 23:51:12.908630 kubelet[2161]: I0508 23:51:12.908585 2161 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:12.924196 kubelet[2161]: I0508 23:51:12.924037 2161 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 23:51:12.924196 kubelet[2161]: E0508 23:51:12.924077 2161 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 23:51:12.933924 kubelet[2161]: E0508 23:51:12.933064 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:13.034666 kubelet[2161]: E0508 23:51:13.034524 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:13.134998 kubelet[2161]: E0508 23:51:13.134957 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:13.235886 kubelet[2161]: E0508 23:51:13.235835 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" 
May 8 23:51:13.336463 kubelet[2161]: E0508 23:51:13.336371 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:13.437159 kubelet[2161]: E0508 23:51:13.437107 2161 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:13.664168 kubelet[2161]: I0508 23:51:13.664072 2161 apiserver.go:52] "Watching apiserver" May 8 23:51:13.672534 kubelet[2161]: I0508 23:51:13.672480 2161 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 23:51:14.813704 systemd[1]: Reloading requested from client PID 2445 ('systemctl') (unit session-7.scope)... May 8 23:51:14.813720 systemd[1]: Reloading... May 8 23:51:14.867870 zram_generator::config[2484]: No configuration found. May 8 23:51:14.951648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:51:15.015986 systemd[1]: Reloading finished in 201 ms. May 8 23:51:15.050494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:15.066001 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:51:15.066249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:15.066298 systemd[1]: kubelet.service: Consumed 1.651s CPU time, 116.7M memory peak, 0B memory swap peak. May 8 23:51:15.075192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:51:15.162027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:51:15.165648 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:51:15.197267 kubelet[2526]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:15.197267 kubelet[2526]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 23:51:15.197267 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:51:15.197583 kubelet[2526]: I0508 23:51:15.197310 2526 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:51:15.206861 kubelet[2526]: I0508 23:51:15.206690 2526 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 23:51:15.206861 kubelet[2526]: I0508 23:51:15.206717 2526 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:51:15.206989 kubelet[2526]: I0508 23:51:15.206968 2526 server.go:929] "Client rotation is on, will bootstrap in background" May 8 23:51:15.208382 kubelet[2526]: I0508 23:51:15.208249 2526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 8 23:51:15.210818 kubelet[2526]: I0508 23:51:15.210672 2526 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:51:15.215274 kubelet[2526]: E0508 23:51:15.215214 2526 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 23:51:15.215274 kubelet[2526]: I0508 23:51:15.215266 2526 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 23:51:15.218041 kubelet[2526]: I0508 23:51:15.218014 2526 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 23:51:15.218826 kubelet[2526]: I0508 23:51:15.218230 2526 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 23:51:15.218826 kubelet[2526]: I0508 23:51:15.218337 2526 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:51:15.218826 kubelet[2526]: I0508 23:51:15.218365 2526 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 23:51:15.218826 kubelet[2526]: I0508 23:51:15.218531 2526 topology_manager.go:138] "Creating topology manager with none policy" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218539 2526 container_manager_linux.go:300] "Creating device plugin manager" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218567 2526 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218659 2526 kubelet.go:408] "Attempting to sync node with API server" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218670 2526 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218691 2526 
kubelet.go:314] "Adding apiserver pod source" May 8 23:51:15.219036 kubelet[2526]: I0508 23:51:15.218702 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:51:15.222848 kubelet[2526]: I0508 23:51:15.219629 2526 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:51:15.223749 kubelet[2526]: I0508 23:51:15.223722 2526 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:51:15.224155 kubelet[2526]: I0508 23:51:15.224134 2526 server.go:1269] "Started kubelet" May 8 23:51:15.226857 kubelet[2526]: I0508 23:51:15.225650 2526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:51:15.226857 kubelet[2526]: I0508 23:51:15.226485 2526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:51:15.229845 kubelet[2526]: I0508 23:51:15.227262 2526 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:51:15.229845 kubelet[2526]: I0508 23:51:15.227327 2526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:51:15.229845 kubelet[2526]: I0508 23:51:15.228160 2526 server.go:460] "Adding debug handlers to kubelet server" May 8 23:51:15.231773 kubelet[2526]: I0508 23:51:15.231033 2526 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 23:51:15.236337 kubelet[2526]: I0508 23:51:15.236309 2526 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 23:51:15.237088 kubelet[2526]: E0508 23:51:15.236518 2526 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 23:51:15.237924 kubelet[2526]: I0508 23:51:15.237897 2526 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 23:51:15.238036 kubelet[2526]: I0508 23:51:15.238018 2526 reconciler.go:26] "Reconciler: start to sync state" May 8 23:51:15.239626 kubelet[2526]: I0508 23:51:15.239590 2526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:51:15.242665 kubelet[2526]: E0508 23:51:15.242279 2526 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:51:15.242665 kubelet[2526]: I0508 23:51:15.242371 2526 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 23:51:15.242665 kubelet[2526]: I0508 23:51:15.242401 2526 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 23:51:15.242665 kubelet[2526]: I0508 23:51:15.242418 2526 kubelet.go:2321] "Starting kubelet main sync loop" May 8 23:51:15.242665 kubelet[2526]: E0508 23:51:15.242454 2526 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:51:15.244879 kubelet[2526]: I0508 23:51:15.244234 2526 factory.go:221] Registration of the systemd container factory successfully May 8 23:51:15.244879 kubelet[2526]: I0508 23:51:15.244310 2526 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:51:15.245254 kubelet[2526]: I0508 23:51:15.245229 2526 factory.go:221] Registration of the containerd container factory successfully May 8 23:51:15.272944 kubelet[2526]: I0508 23:51:15.272918 2526 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 23:51:15.272944 kubelet[2526]: I0508 23:51:15.272939 2526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 23:51:15.273056 kubelet[2526]: I0508 23:51:15.272959 2526 state_mem.go:36] "Initialized new in-memory state store" May 8 23:51:15.273111 kubelet[2526]: I0508 23:51:15.273092 2526 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 23:51:15.273136 kubelet[2526]: I0508 23:51:15.273109 2526 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 23:51:15.273136 kubelet[2526]: I0508 23:51:15.273127 2526 policy_none.go:49] "None policy: Start" May 8 23:51:15.273669 kubelet[2526]: I0508 23:51:15.273635 2526 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 23:51:15.273713 kubelet[2526]: I0508 23:51:15.273677 2526 state_mem.go:35] "Initializing new in-memory state store" May 8 23:51:15.273871 kubelet[2526]: I0508 23:51:15.273833 2526 state_mem.go:75] "Updated machine memory state" May 8 23:51:15.277611 kubelet[2526]: I0508 23:51:15.277569 2526 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:51:15.277741 kubelet[2526]: I0508 23:51:15.277721 2526 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 23:51:15.277787 kubelet[2526]: I0508 23:51:15.277738 2526 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:51:15.278427 kubelet[2526]: I0508 23:51:15.278326 2526 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:51:15.381580 kubelet[2526]: I0508 23:51:15.381460 2526 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 23:51:15.387450 kubelet[2526]: I0508 23:51:15.387420 2526 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 23:51:15.387517 kubelet[2526]: I0508 23:51:15.387497 2526 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 23:51:15.439550 kubelet[2526]: I0508 23:51:15.439505 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:15.439670 
kubelet[2526]: I0508 23:51:15.439572 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:15.439670 kubelet[2526]: I0508 23:51:15.439611 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 23:51:15.439670 kubelet[2526]: I0508 23:51:15.439642 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:15.439806 kubelet[2526]: I0508 23:51:15.439688 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:15.439806 kubelet[2526]: I0508 23:51:15.439704 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:15.439806 kubelet[2526]: I0508 23:51:15.439718 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9db2f6597d14ab2700792c4436b3593-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9db2f6597d14ab2700792c4436b3593\") " pod="kube-system/kube-apiserver-localhost" May 8 23:51:15.439806 kubelet[2526]: I0508 23:51:15.439732 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:15.439806 kubelet[2526]: I0508 23:51:15.439746 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 23:51:15.654466 kubelet[2526]: E0508 23:51:15.654159 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:15.654466 kubelet[2526]: E0508 23:51:15.654162 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:15.654671 kubelet[2526]: E0508 23:51:15.654508 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:15.814474 sudo[2562]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 23:51:15.814741 sudo[2562]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 23:51:16.219935 kubelet[2526]: I0508 23:51:16.219889 2526 apiserver.go:52] "Watching apiserver" May 8 23:51:16.238799 kubelet[2526]: I0508 23:51:16.238757 2526 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 23:51:16.260882 kubelet[2526]: E0508 23:51:16.259032 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:16.269048 sudo[2562]: pam_unix(sudo:session): session closed for user root May 8 23:51:16.292237 kubelet[2526]: E0508 23:51:16.292171 2526 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 23:51:16.292342 kubelet[2526]: E0508 23:51:16.292322 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:16.293256 kubelet[2526]: E0508 23:51:16.292668 2526 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 23:51:16.293256 kubelet[2526]: E0508 23:51:16.292769 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:16.300133 kubelet[2526]: I0508 23:51:16.300079 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.299998142 podStartE2EDuration="1.299998142s" podCreationTimestamp="2025-05-08 23:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:16.29200911 +0000 UTC m=+1.123334181" watchObservedRunningTime="2025-05-08 23:51:16.299998142 +0000 UTC m=+1.131323213" May 8 23:51:16.307119 kubelet[2526]: I0508 23:51:16.307072 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.306459208 podStartE2EDuration="1.306459208s" podCreationTimestamp="2025-05-08 23:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:16.306129207 +0000 UTC m=+1.137454318" watchObservedRunningTime="2025-05-08 23:51:16.306459208 +0000 UTC m=+1.137784319" May 8 23:51:16.307231 kubelet[2526]: I0508 23:51:16.307165 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.307160251 podStartE2EDuration="1.307160251s" podCreationTimestamp="2025-05-08 23:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:16.300153183 +0000 UTC m=+1.131478294" watchObservedRunningTime="2025-05-08 23:51:16.307160251 +0000 UTC m=+1.138485362" May 8 23:51:17.260254 kubelet[2526]: E0508 23:51:17.259885 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:17.260254 kubelet[2526]: E0508 23:51:17.259990 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:18.175123 sudo[1623]: pam_unix(sudo:session): session closed for user root May 8 23:51:18.176171 sshd[1622]: Connection closed by 10.0.0.1 port 44830 May 8 23:51:18.176604 sshd-session[1620]: pam_unix(sshd:session): session closed for user core May 8 23:51:18.180037 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:44830.service: Deactivated successfully. May 8 23:51:18.181569 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:51:18.181796 systemd[1]: session-7.scope: Consumed 9.313s CPU time, 154.3M memory peak, 0B memory swap peak. May 8 23:51:18.182342 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. May 8 23:51:18.183340 systemd-logind[1428]: Removed session 7. May 8 23:51:20.604179 kubelet[2526]: I0508 23:51:20.604130 2526 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 23:51:20.604592 containerd[1443]: time="2025-05-08T23:51:20.604490416Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 23:51:20.604765 kubelet[2526]: I0508 23:51:20.604664 2526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 23:51:21.503205 systemd[1]: Created slice kubepods-besteffort-podc14aefd6_f837_4b04_a84f_e4f393492ff0.slice - libcontainer container kubepods-besteffort-podc14aefd6_f837_4b04_a84f_e4f393492ff0.slice. May 8 23:51:21.521328 systemd[1]: Created slice kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice - libcontainer container kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice. May 8 23:51:21.643306 update_engine[1432]: I20250508 23:51:21.643234 1432 update_attempter.cc:509] Updating boot flags... 
May 8 23:51:21.661316 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2611) May 8 23:51:21.676201 kubelet[2526]: I0508 23:51:21.676162 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hostproc\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.676201 kubelet[2526]: I0508 23:51:21.676201 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52s6\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-kube-api-access-x52s6\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676224 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6bt\" (UniqueName: \"kubernetes.io/projected/c14aefd6-f837-4b04-a84f-e4f393492ff0-kube-api-access-jp6bt\") pod \"kube-proxy-pvkqx\" (UID: \"c14aefd6-f837-4b04-a84f-e4f393492ff0\") " pod="kube-system/kube-proxy-pvkqx" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676240 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-bpf-maps\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676263 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-cgroup\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676280 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-run\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676315 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-etc-cni-netd\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678769 kubelet[2526]: I0508 23:51:21.676355 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-xtables-lock\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676390 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c14aefd6-f837-4b04-a84f-e4f393492ff0-kube-proxy\") pod \"kube-proxy-pvkqx\" (UID: \"c14aefd6-f837-4b04-a84f-e4f393492ff0\") " pod="kube-system/kube-proxy-pvkqx" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676410 2526 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c14aefd6-f837-4b04-a84f-e4f393492ff0-lib-modules\") pod \"kube-proxy-pvkqx\" (UID: \"c14aefd6-f837-4b04-a84f-e4f393492ff0\") " pod="kube-system/kube-proxy-pvkqx" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676446 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-net\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676463 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hubble-tls\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676481 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-lib-modules\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.678930 kubelet[2526]: I0508 23:51:21.676498 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cni-path\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.679053 kubelet[2526]: I0508 23:51:21.676516 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80dac02b-8055-4c3e-adb8-1982c0bbba5c-clustermesh-secrets\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.679053 kubelet[2526]: I0508 23:51:21.676533 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-config-path\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.679053 kubelet[2526]: I0508 23:51:21.676549 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-kernel\") pod \"cilium-4skqb\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " pod="kube-system/cilium-4skqb" May 8 23:51:21.679053 kubelet[2526]: I0508 23:51:21.676564 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c14aefd6-f837-4b04-a84f-e4f393492ff0-xtables-lock\") pod \"kube-proxy-pvkqx\" (UID: \"c14aefd6-f837-4b04-a84f-e4f393492ff0\") " pod="kube-system/kube-proxy-pvkqx" May 8 23:51:21.719067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (2615) May 8 23:51:21.722353 systemd[1]: Created slice kubepods-besteffort-pode38f8ae8_25d8_4ac4_addf_64e8114f623f.slice 
- libcontainer container kubepods-besteffort-pode38f8ae8_25d8_4ac4_addf_64e8114f623f.slice. May 8 23:51:21.814678 kubelet[2526]: E0508 23:51:21.814560 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:21.816067 containerd[1443]: time="2025-05-08T23:51:21.816016941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvkqx,Uid:c14aefd6-f837-4b04-a84f-e4f393492ff0,Namespace:kube-system,Attempt:0,}" May 8 23:51:21.826936 kubelet[2526]: E0508 23:51:21.826908 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:21.828605 containerd[1443]: time="2025-05-08T23:51:21.828081896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4skqb,Uid:80dac02b-8055-4c3e-adb8-1982c0bbba5c,Namespace:kube-system,Attempt:0,}" May 8 23:51:21.836968 containerd[1443]: time="2025-05-08T23:51:21.836856442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:21.836968 containerd[1443]: time="2025-05-08T23:51:21.836915042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:21.836968 containerd[1443]: time="2025-05-08T23:51:21.836931202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:21.837730 containerd[1443]: time="2025-05-08T23:51:21.837002962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:21.848086 containerd[1443]: time="2025-05-08T23:51:21.847982554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:21.848213 containerd[1443]: time="2025-05-08T23:51:21.848096595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:21.848213 containerd[1443]: time="2025-05-08T23:51:21.848115515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:21.848674 containerd[1443]: time="2025-05-08T23:51:21.848623036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:21.854105 systemd[1]: Started cri-containerd-d9e4af20b7b74dc52e737072e12c87e09f451a660c4b80ca982d4aab0d1f1094.scope - libcontainer container d9e4af20b7b74dc52e737072e12c87e09f451a660c4b80ca982d4aab0d1f1094. May 8 23:51:21.875028 systemd[1]: Started cri-containerd-a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a.scope - libcontainer container a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a. 
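Every VerifyControllerAttachedVolume line above carries a UniqueName of the form kubernetes.io/host-path/<pod-uid>-<volume-name>. A small parser for pulling those apart when grepping a log like this (it assumes exactly that host-path shape; UIDs are dashed UUIDs for API-created pods and 32 hex characters for the static pods earlier in the log):

    import re

    UNIQUE = re.compile(
        r"kubernetes\.io/host-path/"
        r"([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12}|[0-9a-f]{32})-(.+)"
    )

    def split_unique_name(name: str):
        """Split a host-path volume UniqueName into (pod UID, volume name)."""
        m = UNIQUE.fullmatch(name)
        if m is None:
            raise ValueError(f"not a host-path UniqueName: {name}")
        return m.group(1), m.group(2)

    print(split_unique_name(
        "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hostproc"))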
May 8 23:51:21.877940 kubelet[2526]: I0508 23:51:21.877862 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rkbs\" (UniqueName: \"kubernetes.io/projected/e38f8ae8-25d8-4ac4-addf-64e8114f623f-kube-api-access-7rkbs\") pod \"cilium-operator-5d85765b45-bcqtv\" (UID: \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\") " pod="kube-system/cilium-operator-5d85765b45-bcqtv" May 8 23:51:21.878190 kubelet[2526]: I0508 23:51:21.878127 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38f8ae8-25d8-4ac4-addf-64e8114f623f-cilium-config-path\") pod \"cilium-operator-5d85765b45-bcqtv\" (UID: \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\") " pod="kube-system/cilium-operator-5d85765b45-bcqtv" May 8 23:51:21.892825 containerd[1443]: time="2025-05-08T23:51:21.892776005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvkqx,Uid:c14aefd6-f837-4b04-a84f-e4f393492ff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9e4af20b7b74dc52e737072e12c87e09f451a660c4b80ca982d4aab0d1f1094\"" May 8 23:51:21.894291 kubelet[2526]: E0508 23:51:21.894252 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:21.898294 containerd[1443]: time="2025-05-08T23:51:21.898021780Z" level=info msg="CreateContainer within sandbox \"d9e4af20b7b74dc52e737072e12c87e09f451a660c4b80ca982d4aab0d1f1094\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:51:21.898455 containerd[1443]: time="2025-05-08T23:51:21.898218141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4skqb,Uid:80dac02b-8055-4c3e-adb8-1982c0bbba5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\"" May 8 23:51:21.899018 kubelet[2526]: E0508 23:51:21.898996 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:21.900160 containerd[1443]: time="2025-05-08T23:51:21.900050346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 23:51:21.920057 containerd[1443]: time="2025-05-08T23:51:21.920006684Z" level=info msg="CreateContainer within sandbox \"d9e4af20b7b74dc52e737072e12c87e09f451a660c4b80ca982d4aab0d1f1094\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d4bdf62e3286ee7eb4a4d96b1d5c9003f8b14dd03ac015b325730555c62fc3cd\"" May 8 23:51:21.920688 containerd[1443]: time="2025-05-08T23:51:21.920616926Z" level=info msg="StartContainer for \"d4bdf62e3286ee7eb4a4d96b1d5c9003f8b14dd03ac015b325730555c62fc3cd\"" May 8 23:51:21.946066 systemd[1]: Started cri-containerd-d4bdf62e3286ee7eb4a4d96b1d5c9003f8b14dd03ac015b325730555c62fc3cd.scope - libcontainer container d4bdf62e3286ee7eb4a4d96b1d5c9003f8b14dd03ac015b325730555c62fc3cd. 
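The cilium image is requested by tag and digest at once (repo:tag@sha256:...), so containerd resolves by digest and the tag is informational, which is why the "Pulled image" record further down reports an empty repo tag. A rough splitter for the reference shapes that appear in this log:

    def split_image_ref(ref: str):
        """Split registry/repo[:tag][@sha256:digest] into its parts."""
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        repo, tag = ref, None
        # A ':' after the last '/' is a tag, not a registry port.
        if ":" in ref.rsplit("/", 1)[-1]:
            repo, tag = ref.rsplit(":", 1)
        return repo, tag, digest

    print(split_image_ref(
        "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b"
        "2e2d7b72ef728ad94e564740dd505be5"))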
May 8 23:51:21.970775 containerd[1443]: time="2025-05-08T23:51:21.970719352Z" level=info msg="StartContainer for \"d4bdf62e3286ee7eb4a4d96b1d5c9003f8b14dd03ac015b325730555c62fc3cd\" returns successfully" May 8 23:51:22.034227 kubelet[2526]: E0508 23:51:22.033628 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:22.035410 containerd[1443]: time="2025-05-08T23:51:22.035344174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bcqtv,Uid:e38f8ae8-25d8-4ac4-addf-64e8114f623f,Namespace:kube-system,Attempt:0,}" May 8 23:51:22.057501 containerd[1443]: time="2025-05-08T23:51:22.057420874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:22.057501 containerd[1443]: time="2025-05-08T23:51:22.057472194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:22.057659 containerd[1443]: time="2025-05-08T23:51:22.057482874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:22.057659 containerd[1443]: time="2025-05-08T23:51:22.057550554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:22.077073 systemd[1]: Started cri-containerd-8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac.scope - libcontainer container 8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac. May 8 23:51:22.105396 containerd[1443]: time="2025-05-08T23:51:22.105240685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bcqtv,Uid:e38f8ae8-25d8-4ac4-addf-64e8114f623f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\"" May 8 23:51:22.106119 kubelet[2526]: E0508 23:51:22.106096 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:22.268588 kubelet[2526]: E0508 23:51:22.268544 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:22.278708 kubelet[2526]: I0508 23:51:22.278644 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvkqx" podStartSLOduration=1.278627838 podStartE2EDuration="1.278627838s" podCreationTimestamp="2025-05-08 23:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:22.278414477 +0000 UTC m=+7.109739548" watchObservedRunningTime="2025-05-08 23:51:22.278627838 +0000 UTC m=+7.109952949" May 8 23:51:23.524897 kubelet[2526]: E0508 23:51:23.524806 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:23.772423 kubelet[2526]: E0508 23:51:23.772370 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 8 23:51:24.271511 kubelet[2526]: E0508 23:51:24.271476 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:24.272386 kubelet[2526]: E0508 23:51:24.271969 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:25.098800 kubelet[2526]: E0508 23:51:25.098627 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:25.272873 kubelet[2526]: E0508 23:51:25.272816 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:30.348308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061291943.mount: Deactivated successfully. May 8 23:51:31.503211 containerd[1443]: time="2025-05-08T23:51:31.503160134Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:31.504124 containerd[1443]: time="2025-05-08T23:51:31.503919495Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 23:51:31.504880 containerd[1443]: time="2025-05-08T23:51:31.504849016Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:31.506871 containerd[1443]: time="2025-05-08T23:51:31.506769819Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.606618993s" May 8 23:51:31.506871 containerd[1443]: time="2025-05-08T23:51:31.506803979Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 23:51:31.509302 containerd[1443]: time="2025-05-08T23:51:31.509271383Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 23:51:31.515689 containerd[1443]: time="2025-05-08T23:51:31.515654873Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:51:31.552754 containerd[1443]: time="2025-05-08T23:51:31.552656369Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\"" May 8 23:51:31.553398 containerd[1443]: time="2025-05-08T23:51:31.553227570Z" level=info 
msg="StartContainer for \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\"" May 8 23:51:31.580037 systemd[1]: Started cri-containerd-a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174.scope - libcontainer container a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174. May 8 23:51:31.609464 containerd[1443]: time="2025-05-08T23:51:31.609346576Z" level=info msg="StartContainer for \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\" returns successfully" May 8 23:51:31.645067 systemd[1]: cri-containerd-a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174.scope: Deactivated successfully. May 8 23:51:31.828575 containerd[1443]: time="2025-05-08T23:51:31.823424663Z" level=info msg="shim disconnected" id=a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174 namespace=k8s.io May 8 23:51:31.828575 containerd[1443]: time="2025-05-08T23:51:31.828510871Z" level=warning msg="cleaning up after shim disconnected" id=a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174 namespace=k8s.io May 8 23:51:31.828575 containerd[1443]: time="2025-05-08T23:51:31.828525071Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:51:32.295055 kubelet[2526]: E0508 23:51:32.295024 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:32.298098 containerd[1443]: time="2025-05-08T23:51:32.297031838Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:51:32.308031 containerd[1443]: time="2025-05-08T23:51:32.307988854Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\"" May 8 23:51:32.308444 containerd[1443]: time="2025-05-08T23:51:32.308386335Z" level=info msg="StartContainer for \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\"" May 8 23:51:32.337016 systemd[1]: Started cri-containerd-8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2.scope - libcontainer container 8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2. May 8 23:51:32.355503 containerd[1443]: time="2025-05-08T23:51:32.355459362Z" level=info msg="StartContainer for \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\" returns successfully" May 8 23:51:32.368201 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:51:32.368410 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 23:51:32.368467 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 23:51:32.374280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:51:32.374465 systemd[1]: cri-containerd-8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2.scope: Deactivated successfully. 
May 8 23:51:32.392507 containerd[1443]: time="2025-05-08T23:51:32.392450055Z" level=info msg="shim disconnected" id=8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2 namespace=k8s.io May 8 23:51:32.392507 containerd[1443]: time="2025-05-08T23:51:32.392503175Z" level=warning msg="cleaning up after shim disconnected" id=8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2 namespace=k8s.io May 8 23:51:32.392507 containerd[1443]: time="2025-05-08T23:51:32.392512655Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:51:32.407699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:51:32.550479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174-rootfs.mount: Deactivated successfully. May 8 23:51:33.298412 kubelet[2526]: E0508 23:51:33.298382 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:33.300492 containerd[1443]: time="2025-05-08T23:51:33.300303808Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:51:33.330084 containerd[1443]: time="2025-05-08T23:51:33.330035488Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\"" May 8 23:51:33.330561 containerd[1443]: time="2025-05-08T23:51:33.330534809Z" level=info msg="StartContainer for \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\"" May 8 23:51:33.357998 systemd[1]: Started cri-containerd-d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db.scope - libcontainer container d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db. May 8 23:51:33.388122 containerd[1443]: time="2025-05-08T23:51:33.388071366Z" level=info msg="StartContainer for \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\" returns successfully" May 8 23:51:33.402535 systemd[1]: cri-containerd-d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db.scope: Deactivated successfully. May 8 23:51:33.430166 containerd[1443]: time="2025-05-08T23:51:33.430110542Z" level=info msg="shim disconnected" id=d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db namespace=k8s.io May 8 23:51:33.430166 containerd[1443]: time="2025-05-08T23:51:33.430161982Z" level=warning msg="cleaning up after shim disconnected" id=d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db namespace=k8s.io May 8 23:51:33.430166 containerd[1443]: time="2025-05-08T23:51:33.430170782Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:51:33.550118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db-rootfs.mount: Deactivated successfully. 
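
The "Nameserver limits exceeded" errors that kubelet's dns.go keeps emitting throughout this capture occur because the node's /etc/resolv.conf lists more nameservers than the libc resolver supports (three), so kubelet truncates the list when composing a pod's resolv.conf and reports the rest as omitted. A minimal sketch of that trimming behaviour (not kubelet's actual code, just the same rule applied to the host file):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the libc resolver limit (MAXNS = 3) that kubelet
// enforces when it assembles a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		// kubelet keeps the first three and logs the remainder as omitted,
		// which is what produces the recurring dns.go:153 errors above.
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Printf("applied nameserver line: %s\n", strings.Join(nameservers, " "))
}
```

On the node that produced this log, the surviving three entries are exactly the "1.1.1.1 1.0.0.1 8.8.8.8" line repeated in the errors.
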
May 8 23:51:33.639850 containerd[1443]: time="2025-05-08T23:51:33.639794384Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:33.640452 containerd[1443]: time="2025-05-08T23:51:33.640403905Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 23:51:33.641069 containerd[1443]: time="2025-05-08T23:51:33.641038426Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:51:33.642382 containerd[1443]: time="2025-05-08T23:51:33.642349987Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.133043924s" May 8 23:51:33.642434 containerd[1443]: time="2025-05-08T23:51:33.642386587Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 23:51:33.645581 containerd[1443]: time="2025-05-08T23:51:33.645539992Z" level=info msg="CreateContainer within sandbox \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 23:51:33.671282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055278232.mount: Deactivated successfully. May 8 23:51:33.674281 containerd[1443]: time="2025-05-08T23:51:33.674245870Z" level=info msg="CreateContainer within sandbox \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\"" May 8 23:51:33.674776 containerd[1443]: time="2025-05-08T23:51:33.674755471Z" level=info msg="StartContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\"" May 8 23:51:33.701985 systemd[1]: Started cri-containerd-47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e.scope - libcontainer container 47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e. 
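
Every container in this log follows the same containerd lifecycle: PullImage by digest, CreateContainer within a sandbox, StartContainer, and a matching cri-containerd-&lt;id&gt;.scope systemd unit. A rough sketch of that sequence with containerd's Go client, assuming a reachable /run/containerd/containerd.sock and root privileges (the container ID and snapshot name here are illustrative, not taken from the log):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI containers live in the k8s.io namespace, as the shim logs show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by digest, as kubelet does for the cilium images above.
	image, err := client.Pull(ctx,
		"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of "CreateContainer within sandbox ... returns container id".
	container, err := client.NewContainer(ctx, "cilium-operator-demo",
		containerd.WithNewSnapshot("cilium-operator-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Equivalent of "StartContainer ... returns successfully".
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```
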
May 8 23:51:33.723102 containerd[1443]: time="2025-05-08T23:51:33.723062576Z" level=info msg="StartContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" returns successfully" May 8 23:51:34.304866 kubelet[2526]: E0508 23:51:34.304587 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:34.306756 kubelet[2526]: E0508 23:51:34.306664 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:34.308409 containerd[1443]: time="2025-05-08T23:51:34.308363416Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:51:34.336330 kubelet[2526]: I0508 23:51:34.336270 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bcqtv" podStartSLOduration=1.7998794710000001 podStartE2EDuration="13.336256811s" podCreationTimestamp="2025-05-08 23:51:21 +0000 UTC" firstStartedPulling="2025-05-08 23:51:22.106963249 +0000 UTC m=+6.938288360" lastFinishedPulling="2025-05-08 23:51:33.643340589 +0000 UTC m=+18.474665700" observedRunningTime="2025-05-08 23:51:34.335954051 +0000 UTC m=+19.167279162" watchObservedRunningTime="2025-05-08 23:51:34.336256811 +0000 UTC m=+19.167581882" May 8 23:51:34.427397 containerd[1443]: time="2025-05-08T23:51:34.427167886Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\"" May 8 23:51:34.429040 containerd[1443]: time="2025-05-08T23:51:34.428073887Z" level=info msg="StartContainer for \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\"" May 8 23:51:34.465994 systemd[1]: Started cri-containerd-8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7.scope - libcontainer container 8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7. May 8 23:51:34.485776 systemd[1]: cri-containerd-8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7.scope: Deactivated successfully. 
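
The pod_startup_latency_tracker figures above are timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why pods that never pulled (zero-valued pull timestamps, like kube-proxy earlier) report the two as identical. Reproducing the cilium-operator numbers from the entry above:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator latency entry above.
	created := mustParse("2025-05-08 23:51:21 +0000 UTC")
	firstPull := mustParse("2025-05-08 23:51:22.106963249 +0000 UTC")
	lastPull := mustParse("2025-05-08 23:51:33.643340589 +0000 UTC")
	running := mustParse("2025-05-08 23:51:34.335954051 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	// Prints roughly 13.336s and 1.800s; the final digits differ slightly
	// from the log because kubelet does this arithmetic on its monotonic
	// clock (the m=+... offsets), not on the wall-clock strings.
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}
```
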
May 8 23:51:34.487560 containerd[1443]: time="2025-05-08T23:51:34.487182201Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice/cri-containerd-8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7.scope/memory.events\": no such file or directory" May 8 23:51:34.488597 containerd[1443]: time="2025-05-08T23:51:34.488556403Z" level=info msg="StartContainer for \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\" returns successfully" May 8 23:51:34.531032 containerd[1443]: time="2025-05-08T23:51:34.530960896Z" level=info msg="shim disconnected" id=8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7 namespace=k8s.io May 8 23:51:34.531032 containerd[1443]: time="2025-05-08T23:51:34.531025496Z" level=warning msg="cleaning up after shim disconnected" id=8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7 namespace=k8s.io May 8 23:51:34.531032 containerd[1443]: time="2025-05-08T23:51:34.531034736Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:51:35.310090 kubelet[2526]: E0508 23:51:35.310039 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:35.310736 kubelet[2526]: E0508 23:51:35.310491 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:35.315005 containerd[1443]: time="2025-05-08T23:51:35.314906058Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:51:35.337858 containerd[1443]: time="2025-05-08T23:51:35.337762525Z" level=info msg="CreateContainer within sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\"" May 8 23:51:35.338446 containerd[1443]: time="2025-05-08T23:51:35.338421366Z" level=info msg="StartContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\"" May 8 23:51:35.363034 systemd[1]: Started cri-containerd-c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7.scope - libcontainer container c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7. May 8 23:51:35.387099 containerd[1443]: time="2025-05-08T23:51:35.385577102Z" level=info msg="StartContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" returns successfully" May 8 23:51:35.550029 systemd[1]: run-containerd-runc-k8s.io-c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7-runc.zktwiN.mount: Deactivated successfully. May 8 23:51:35.585642 kubelet[2526]: I0508 23:51:35.585561 2526 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 23:51:35.621936 systemd[1]: Created slice kubepods-burstable-poddbbb781a_ee4f_4dc6_bf5c_57e6cefe1621.slice - libcontainer container kubepods-burstable-poddbbb781a_ee4f_4dc6_bf5c_57e6cefe1621.slice. 
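
The *cgroupsv2.Manager.EventChan warning above is benign: the clean-cilium-state init container exits so quickly that its cgroup directory is removed before containerd can register a watch on memory.events, so the inotify call fails with ENOENT. The failing call reduces to roughly this (the cgroup path is a shortened placeholder, not the full scope path from the log):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical scope path; the real one is under kubepods-burstable-*.slice.
	path := "/sys/fs/cgroup/kubepods.slice/example.scope/memory.events"

	fd, err := unix.InotifyInit1(unix.IN_CLOEXEC)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer unix.Close(fd)

	// Once the cgroup is gone this fails with ENOENT ("no such file or
	// directory"), which is exactly the warning containerd logs above.
	if _, err := unix.InotifyAddWatch(fd, path, unix.IN_MODIFY); err != nil {
		fmt.Fprintf(os.Stderr, "failed to add inotify watch for %q: %v\n", path, err)
	}
}
```
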
May 8 23:51:35.630271 systemd[1]: Created slice kubepods-burstable-pod9d961d68_f76b_4796_8d84_6471b43c7340.slice - libcontainer container kubepods-burstable-pod9d961d68_f76b_4796_8d84_6471b43c7340.slice. May 8 23:51:35.777264 kubelet[2526]: I0508 23:51:35.777208 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621-config-volume\") pod \"coredns-6f6b679f8f-2qtnj\" (UID: \"dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621\") " pod="kube-system/coredns-6f6b679f8f-2qtnj" May 8 23:51:35.777264 kubelet[2526]: I0508 23:51:35.777255 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d961d68-f76b-4796-8d84-6471b43c7340-config-volume\") pod \"coredns-6f6b679f8f-pm2mq\" (UID: \"9d961d68-f76b-4796-8d84-6471b43c7340\") " pod="kube-system/coredns-6f6b679f8f-pm2mq" May 8 23:51:35.777420 kubelet[2526]: I0508 23:51:35.777274 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlkt\" (UniqueName: \"kubernetes.io/projected/9d961d68-f76b-4796-8d84-6471b43c7340-kube-api-access-bxlkt\") pod \"coredns-6f6b679f8f-pm2mq\" (UID: \"9d961d68-f76b-4796-8d84-6471b43c7340\") " pod="kube-system/coredns-6f6b679f8f-pm2mq" May 8 23:51:35.777420 kubelet[2526]: I0508 23:51:35.777296 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdqd\" (UniqueName: \"kubernetes.io/projected/dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621-kube-api-access-stdqd\") pod \"coredns-6f6b679f8f-2qtnj\" (UID: \"dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621\") " pod="kube-system/coredns-6f6b679f8f-2qtnj" May 8 23:51:35.925601 kubelet[2526]: E0508 23:51:35.925505 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:35.928202 containerd[1443]: time="2025-05-08T23:51:35.928165182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qtnj,Uid:dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621,Namespace:kube-system,Attempt:0,}" May 8 23:51:35.933412 kubelet[2526]: E0508 23:51:35.933384 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:35.934011 containerd[1443]: time="2025-05-08T23:51:35.933972429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pm2mq,Uid:9d961d68-f76b-4796-8d84-6471b43c7340,Namespace:kube-system,Attempt:0,}" May 8 23:51:36.313956 kubelet[2526]: E0508 23:51:36.313500 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:36.328213 kubelet[2526]: I0508 23:51:36.328166 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4skqb" podStartSLOduration=5.71853611 podStartE2EDuration="15.328151389s" podCreationTimestamp="2025-05-08 23:51:21 +0000 UTC" firstStartedPulling="2025-05-08 23:51:21.899507384 +0000 UTC m=+6.730832455" lastFinishedPulling="2025-05-08 23:51:31.509122623 +0000 UTC m=+16.340447734" observedRunningTime="2025-05-08 23:51:36.327174388 +0000 UTC m=+21.158499459" 
watchObservedRunningTime="2025-05-08 23:51:36.328151389 +0000 UTC m=+21.159476500" May 8 23:51:37.315547 kubelet[2526]: E0508 23:51:37.315509 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:37.534038 systemd-networkd[1376]: cilium_host: Link UP May 8 23:51:37.535443 systemd-networkd[1376]: cilium_net: Link UP May 8 23:51:37.536063 systemd-networkd[1376]: cilium_net: Gained carrier May 8 23:51:37.536972 systemd-networkd[1376]: cilium_host: Gained carrier May 8 23:51:37.623432 systemd-networkd[1376]: cilium_vxlan: Link UP May 8 23:51:37.623440 systemd-networkd[1376]: cilium_vxlan: Gained carrier May 8 23:51:37.894993 systemd-networkd[1376]: cilium_net: Gained IPv6LL May 8 23:51:37.908878 kernel: NET: Registered PF_ALG protocol family May 8 23:51:38.254986 systemd-networkd[1376]: cilium_host: Gained IPv6LL May 8 23:51:38.316745 kubelet[2526]: E0508 23:51:38.316701 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:38.487548 systemd-networkd[1376]: lxc_health: Link UP May 8 23:51:38.488722 systemd-networkd[1376]: lxc_health: Gained carrier May 8 23:51:38.569761 systemd-networkd[1376]: lxc831bb8dd53b8: Link UP May 8 23:51:38.584878 kernel: eth0: renamed from tmp23f66 May 8 23:51:38.596624 systemd-networkd[1376]: lxc831bb8dd53b8: Gained carrier May 8 23:51:38.596880 systemd-networkd[1376]: lxc5762e636fde3: Link UP May 8 23:51:38.607867 kernel: eth0: renamed from tmpcb7e6 May 8 23:51:38.611396 systemd-networkd[1376]: lxc5762e636fde3: Gained carrier May 8 23:51:39.407272 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL May 8 23:51:39.834567 kubelet[2526]: E0508 23:51:39.834065 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:39.919212 systemd-networkd[1376]: lxc5762e636fde3: Gained IPv6LL May 8 23:51:40.111369 systemd-networkd[1376]: lxc_health: Gained IPv6LL May 8 23:51:40.319801 kubelet[2526]: E0508 23:51:40.319750 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:40.431068 systemd-networkd[1376]: lxc831bb8dd53b8: Gained IPv6LL May 8 23:51:41.321300 kubelet[2526]: E0508 23:51:41.321246 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:41.662037 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:34260.service - OpenSSH per-connection server daemon (10.0.0.1:34260). May 8 23:51:41.716419 sshd[3765]: Accepted publickey for core from 10.0.0.1 port 34260 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:41.718139 sshd-session[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:41.722107 systemd-logind[1428]: New session 8 of user core. May 8 23:51:41.735017 systemd[1]: Started session-8.scope - Session 8 of User core. 
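
The interface bring-up that systemd-networkd reports earlier in this stretch (cilium_host/cilium_net as a veth pair, cilium_vxlan as the overlay device, plus per-endpoint lxc* links) happens over rtnetlink. A rough equivalent using the vishvananda/netlink library (an assumption here, though it is the library Cilium itself builds on; this must run as root, and the names, VNI, and port are placeholders rather than Cilium's real values):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// veth pair analogous to cilium_host <-> cilium_net.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "demo_host"},
		PeerName:  "demo_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}

	// VXLAN device analogous to cilium_vxlan.
	vxlan := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: "demo_vxlan"},
		VxlanId:   2,    // hypothetical VNI
		Port:      8472, // conventional Linux VXLAN port
	}
	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatal(err)
	}

	// Setting the links up is what produces the "Link UP" /
	// "Gained carrier" lines from systemd-networkd.
	for _, name := range []string{"demo_host", "demo_net", "demo_vxlan"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if err := netlink.LinkSetUp(link); err != nil {
			log.Fatal(err)
		}
	}
}
```
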
May 8 23:51:41.864593 sshd[3767]: Connection closed by 10.0.0.1 port 34260 May 8 23:51:41.864912 sshd-session[3765]: pam_unix(sshd:session): session closed for user core May 8 23:51:41.868913 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:34260.service: Deactivated successfully. May 8 23:51:41.870563 systemd[1]: session-8.scope: Deactivated successfully. May 8 23:51:41.871292 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit. May 8 23:51:41.872260 systemd-logind[1428]: Removed session 8. May 8 23:51:42.102037 containerd[1443]: time="2025-05-08T23:51:42.101947386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:42.102037 containerd[1443]: time="2025-05-08T23:51:42.102003186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:42.102550 containerd[1443]: time="2025-05-08T23:51:42.102018626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:42.102550 containerd[1443]: time="2025-05-08T23:51:42.102095946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:42.108320 containerd[1443]: time="2025-05-08T23:51:42.107775550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:51:42.108635 containerd[1443]: time="2025-05-08T23:51:42.108294951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:51:42.108635 containerd[1443]: time="2025-05-08T23:51:42.108307631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:42.108635 containerd[1443]: time="2025-05-08T23:51:42.108396951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:51:42.115078 systemd[1]: run-containerd-runc-k8s.io-23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453-runc.Daupis.mount: Deactivated successfully. May 8 23:51:42.128015 systemd[1]: Started cri-containerd-23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453.scope - libcontainer container 23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453. May 8 23:51:42.130764 systemd[1]: Started cri-containerd-cb7e6e06cb0cdb3220e75880a15e9ed39a603c39a13a1cf245c919db2ace624a.scope - libcontainer container cb7e6e06cb0cdb3220e75880a15e9ed39a603c39a13a1cf245c919db2ace624a. 
May 8 23:51:42.138893 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:51:42.143862 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 23:51:42.157617 containerd[1443]: time="2025-05-08T23:51:42.157511548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2qtnj,Uid:dbbb781a-ee4f-4dc6-bf5c-57e6cefe1621,Namespace:kube-system,Attempt:0,} returns sandbox id \"23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453\"" May 8 23:51:42.158431 kubelet[2526]: E0508 23:51:42.158303 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.161947 containerd[1443]: time="2025-05-08T23:51:42.161258271Z" level=info msg="CreateContainer within sandbox \"23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:51:42.165974 containerd[1443]: time="2025-05-08T23:51:42.165943954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pm2mq,Uid:9d961d68-f76b-4796-8d84-6471b43c7340,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb7e6e06cb0cdb3220e75880a15e9ed39a603c39a13a1cf245c919db2ace624a\"" May 8 23:51:42.167969 kubelet[2526]: E0508 23:51:42.167939 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.172621 containerd[1443]: time="2025-05-08T23:51:42.172582999Z" level=info msg="CreateContainer within sandbox \"cb7e6e06cb0cdb3220e75880a15e9ed39a603c39a13a1cf245c919db2ace624a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:51:42.175533 containerd[1443]: time="2025-05-08T23:51:42.175430721Z" level=info msg="CreateContainer within sandbox \"23f660223b7c8476378bb18198dd7fcfa24216893346e36d8021a30810cbd453\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d0260c9686cf713ec8aa4dd35d9cc152d4ea487f65478e677782cb0a4b78a8c\"" May 8 23:51:42.177504 containerd[1443]: time="2025-05-08T23:51:42.176327562Z" level=info msg="StartContainer for \"6d0260c9686cf713ec8aa4dd35d9cc152d4ea487f65478e677782cb0a4b78a8c\"" May 8 23:51:42.182419 containerd[1443]: time="2025-05-08T23:51:42.182380246Z" level=info msg="CreateContainer within sandbox \"cb7e6e06cb0cdb3220e75880a15e9ed39a603c39a13a1cf245c919db2ace624a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2118806d7af762dddaec1f3c928fcec8e00b8c2332d2fa76c5d32fafbc6b9b4\"" May 8 23:51:42.183055 containerd[1443]: time="2025-05-08T23:51:42.182997647Z" level=info msg="StartContainer for \"b2118806d7af762dddaec1f3c928fcec8e00b8c2332d2fa76c5d32fafbc6b9b4\"" May 8 23:51:42.202123 systemd[1]: Started cri-containerd-6d0260c9686cf713ec8aa4dd35d9cc152d4ea487f65478e677782cb0a4b78a8c.scope - libcontainer container 6d0260c9686cf713ec8aa4dd35d9cc152d4ea487f65478e677782cb0a4b78a8c. May 8 23:51:42.207084 systemd[1]: Started cri-containerd-b2118806d7af762dddaec1f3c928fcec8e00b8c2332d2fa76c5d32fafbc6b9b4.scope - libcontainer container b2118806d7af762dddaec1f3c928fcec8e00b8c2332d2fa76c5d32fafbc6b9b4. 
May 8 23:51:42.238579 containerd[1443]: time="2025-05-08T23:51:42.238506889Z" level=info msg="StartContainer for \"b2118806d7af762dddaec1f3c928fcec8e00b8c2332d2fa76c5d32fafbc6b9b4\" returns successfully" May 8 23:51:42.238715 containerd[1443]: time="2025-05-08T23:51:42.238520129Z" level=info msg="StartContainer for \"6d0260c9686cf713ec8aa4dd35d9cc152d4ea487f65478e677782cb0a4b78a8c\" returns successfully" May 8 23:51:42.325566 kubelet[2526]: E0508 23:51:42.325054 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.336850 kubelet[2526]: E0508 23:51:42.335060 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:42.357597 kubelet[2526]: I0508 23:51:42.357445 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pm2mq" podStartSLOduration=21.357427738 podStartE2EDuration="21.357427738s" podCreationTimestamp="2025-05-08 23:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:42.34730705 +0000 UTC m=+27.178632161" watchObservedRunningTime="2025-05-08 23:51:42.357427738 +0000 UTC m=+27.188752849" May 8 23:51:43.336036 kubelet[2526]: E0508 23:51:43.335341 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:43.336036 kubelet[2526]: E0508 23:51:43.335464 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:43.421447 kubelet[2526]: I0508 23:51:43.421381 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2qtnj" podStartSLOduration=22.421364277 podStartE2EDuration="22.421364277s" podCreationTimestamp="2025-05-08 23:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:51:42.359233019 +0000 UTC m=+27.190558130" watchObservedRunningTime="2025-05-08 23:51:43.421364277 +0000 UTC m=+28.252689348" May 8 23:51:44.337502 kubelet[2526]: E0508 23:51:44.337388 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:44.337502 kubelet[2526]: E0508 23:51:44.337446 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:51:46.879398 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778). May 8 23:51:46.929601 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:46.930996 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:46.934720 systemd-logind[1428]: New session 9 of user core. May 8 23:51:46.940988 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 8 23:51:47.060893 sshd[3955]: Connection closed by 10.0.0.1 port 40778 May 8 23:51:47.060867 sshd-session[3953]: pam_unix(sshd:session): session closed for user core May 8 23:51:47.064022 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:40778.service: Deactivated successfully. May 8 23:51:47.065686 systemd[1]: session-9.scope: Deactivated successfully. May 8 23:51:47.066300 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit. May 8 23:51:47.067139 systemd-logind[1428]: Removed session 9. May 8 23:51:52.073488 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:40786.service - OpenSSH per-connection server daemon (10.0.0.1:40786). May 8 23:51:52.118403 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 40786 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:52.119580 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:52.123831 systemd-logind[1428]: New session 10 of user core. May 8 23:51:52.131009 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 23:51:52.244564 sshd[3970]: Connection closed by 10.0.0.1 port 40786 May 8 23:51:52.245229 sshd-session[3968]: pam_unix(sshd:session): session closed for user core May 8 23:51:52.248650 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:40786.service: Deactivated successfully. May 8 23:51:52.250788 systemd[1]: session-10.scope: Deactivated successfully. May 8 23:51:52.251664 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. May 8 23:51:52.254325 systemd-logind[1428]: Removed session 10. May 8 23:51:57.258178 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:35030.service - OpenSSH per-connection server daemon (10.0.0.1:35030). May 8 23:51:57.317430 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 35030 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:57.319474 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:57.326608 systemd-logind[1428]: New session 11 of user core. May 8 23:51:57.333663 systemd[1]: Started session-11.scope - Session 11 of User core. May 8 23:51:57.454367 sshd[3987]: Connection closed by 10.0.0.1 port 35030 May 8 23:51:57.454913 sshd-session[3985]: pam_unix(sshd:session): session closed for user core May 8 23:51:57.470927 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:35030.service: Deactivated successfully. May 8 23:51:57.473578 systemd[1]: session-11.scope: Deactivated successfully. May 8 23:51:57.475262 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. May 8 23:51:57.477239 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:35042.service - OpenSSH per-connection server daemon (10.0.0.1:35042). May 8 23:51:57.478353 systemd-logind[1428]: Removed session 11. May 8 23:51:57.522490 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 35042 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:57.523828 sshd-session[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:57.528026 systemd-logind[1428]: New session 12 of user core. May 8 23:51:57.541035 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 23:51:57.689934 sshd[4002]: Connection closed by 10.0.0.1 port 35042 May 8 23:51:57.691199 sshd-session[4000]: pam_unix(sshd:session): session closed for user core May 8 23:51:57.703329 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:35042.service: Deactivated successfully. 
May 8 23:51:57.707417 systemd[1]: session-12.scope: Deactivated successfully. May 8 23:51:57.710734 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. May 8 23:51:57.722293 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). May 8 23:51:57.723094 systemd-logind[1428]: Removed session 12. May 8 23:51:57.764747 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:51:57.766003 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:51:57.770257 systemd-logind[1428]: New session 13 of user core. May 8 23:51:57.780999 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 23:51:57.892637 sshd[4015]: Connection closed by 10.0.0.1 port 35052 May 8 23:51:57.892988 sshd-session[4013]: pam_unix(sshd:session): session closed for user core May 8 23:51:57.896023 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:35052.service: Deactivated successfully. May 8 23:51:57.898670 systemd[1]: session-13.scope: Deactivated successfully. May 8 23:51:57.899711 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. May 8 23:51:57.901105 systemd-logind[1428]: Removed session 13. May 8 23:52:02.907737 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:36836.service - OpenSSH per-connection server daemon (10.0.0.1:36836). May 8 23:52:02.952140 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 36836 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:02.953429 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:02.957469 systemd-logind[1428]: New session 14 of user core. May 8 23:52:02.965014 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 23:52:03.073930 sshd[4030]: Connection closed by 10.0.0.1 port 36836 May 8 23:52:03.074295 sshd-session[4028]: pam_unix(sshd:session): session closed for user core May 8 23:52:03.077379 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:36836.service: Deactivated successfully. May 8 23:52:03.080304 systemd[1]: session-14.scope: Deactivated successfully. May 8 23:52:03.081324 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. May 8 23:52:03.082517 systemd-logind[1428]: Removed session 14. May 8 23:52:08.090343 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:36848.service - OpenSSH per-connection server daemon (10.0.0.1:36848). May 8 23:52:08.135945 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 36848 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:08.137117 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:08.140902 systemd-logind[1428]: New session 15 of user core. May 8 23:52:08.147008 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 23:52:08.252582 sshd[4045]: Connection closed by 10.0.0.1 port 36848 May 8 23:52:08.253125 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 8 23:52:08.263668 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:36848.service: Deactivated successfully. May 8 23:52:08.265881 systemd[1]: session-15.scope: Deactivated successfully. May 8 23:52:08.269064 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. 
May 8 23:52:08.270732 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:36852.service - OpenSSH per-connection server daemon (10.0.0.1:36852). May 8 23:52:08.271545 systemd-logind[1428]: Removed session 15. May 8 23:52:08.316365 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 36852 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:08.317608 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:08.321889 systemd-logind[1428]: New session 16 of user core. May 8 23:52:08.328043 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 23:52:08.524152 sshd[4059]: Connection closed by 10.0.0.1 port 36852 May 8 23:52:08.525225 sshd-session[4057]: pam_unix(sshd:session): session closed for user core May 8 23:52:08.531366 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:36852.service: Deactivated successfully. May 8 23:52:08.533654 systemd[1]: session-16.scope: Deactivated successfully. May 8 23:52:08.535326 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. May 8 23:52:08.546140 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:36868.service - OpenSSH per-connection server daemon (10.0.0.1:36868). May 8 23:52:08.547549 systemd-logind[1428]: Removed session 16. May 8 23:52:08.592157 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 36868 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:08.593437 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:08.597139 systemd-logind[1428]: New session 17 of user core. May 8 23:52:08.605007 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 23:52:09.935072 sshd[4071]: Connection closed by 10.0.0.1 port 36868 May 8 23:52:09.935451 sshd-session[4069]: pam_unix(sshd:session): session closed for user core May 8 23:52:09.943498 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:36868.service: Deactivated successfully. May 8 23:52:09.947344 systemd[1]: session-17.scope: Deactivated successfully. May 8 23:52:09.951120 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. May 8 23:52:09.958299 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). May 8 23:52:09.960835 systemd-logind[1428]: Removed session 17. May 8 23:52:10.005855 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:10.007080 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:10.010927 systemd-logind[1428]: New session 18 of user core. May 8 23:52:10.021051 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 23:52:10.234181 sshd[4092]: Connection closed by 10.0.0.1 port 36884 May 8 23:52:10.234556 sshd-session[4090]: pam_unix(sshd:session): session closed for user core May 8 23:52:10.246039 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:36884.service: Deactivated successfully. May 8 23:52:10.247826 systemd[1]: session-18.scope: Deactivated successfully. May 8 23:52:10.249670 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. May 8 23:52:10.251332 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:36900.service - OpenSSH per-connection server daemon (10.0.0.1:36900). May 8 23:52:10.253003 systemd-logind[1428]: Removed session 18. 
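
The sessions in this stretch follow a tight open/close cadence, a few hundred milliseconds each, which is consistent with automated key-authenticated clients rather than interactive logins. A minimal client that would leave this same server-side trace of "Accepted publickey ... session opened ... session closed" (the host address matches the log; the user, key path, and command are assumptions):

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Demo only; a real client should verify the host key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	// Produces "Accepted publickey for core" and "session opened" on the server.
	client, err := ssh.Dial("tcp", "10.0.0.39:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // yields "Connection closed ... session closed for user core"

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	if err := session.Run("true"); err != nil { // short-lived command, then disconnect
		log.Fatal(err)
	}
}
```
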
May 8 23:52:10.296772 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:10.298180 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:10.301999 systemd-logind[1428]: New session 19 of user core. May 8 23:52:10.314260 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 23:52:10.436633 sshd[4104]: Connection closed by 10.0.0.1 port 36900 May 8 23:52:10.437347 sshd-session[4102]: pam_unix(sshd:session): session closed for user core May 8 23:52:10.440588 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:36900.service: Deactivated successfully. May 8 23:52:10.442349 systemd[1]: session-19.scope: Deactivated successfully. May 8 23:52:10.444290 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit. May 8 23:52:10.445209 systemd-logind[1428]: Removed session 19. May 8 23:52:15.448011 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:58828.service - OpenSSH per-connection server daemon (10.0.0.1:58828). May 8 23:52:15.493787 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 58828 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:15.494945 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:15.498884 systemd-logind[1428]: New session 20 of user core. May 8 23:52:15.505005 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 23:52:15.610713 sshd[4125]: Connection closed by 10.0.0.1 port 58828 May 8 23:52:15.611045 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 8 23:52:15.614235 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:58828.service: Deactivated successfully. May 8 23:52:15.616156 systemd[1]: session-20.scope: Deactivated successfully. May 8 23:52:15.617009 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit. May 8 23:52:15.617783 systemd-logind[1428]: Removed session 20. May 8 23:52:20.621433 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:58836.service - OpenSSH per-connection server daemon (10.0.0.1:58836). May 8 23:52:20.666071 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 58836 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:20.667246 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:20.670904 systemd-logind[1428]: New session 21 of user core. May 8 23:52:20.677993 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 23:52:20.781663 sshd[4140]: Connection closed by 10.0.0.1 port 58836 May 8 23:52:20.782245 sshd-session[4138]: pam_unix(sshd:session): session closed for user core May 8 23:52:20.787047 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:58836.service: Deactivated successfully. May 8 23:52:20.788656 systemd[1]: session-21.scope: Deactivated successfully. May 8 23:52:20.788891 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit. May 8 23:52:20.790203 systemd-logind[1428]: Removed session 21. May 8 23:52:25.797574 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:58502.service - OpenSSH per-connection server daemon (10.0.0.1:58502). 
May 8 23:52:25.841589 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 58502 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:25.842689 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:25.846886 systemd-logind[1428]: New session 22 of user core. May 8 23:52:25.853003 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 23:52:25.963245 sshd[4158]: Connection closed by 10.0.0.1 port 58502 May 8 23:52:25.964142 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 8 23:52:25.970391 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:58502.service: Deactivated successfully. May 8 23:52:25.971816 systemd[1]: session-22.scope: Deactivated successfully. May 8 23:52:25.975669 systemd-logind[1428]: Session 22 logged out. Waiting for processes to exit. May 8 23:52:25.982248 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:58512.service - OpenSSH per-connection server daemon (10.0.0.1:58512). May 8 23:52:25.983288 systemd-logind[1428]: Removed session 22. May 8 23:52:26.023023 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 58512 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:26.024170 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:26.028150 systemd-logind[1428]: New session 23 of user core. May 8 23:52:26.036001 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 23:52:28.653477 containerd[1443]: time="2025-05-08T23:52:28.652377879Z" level=info msg="StopContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" with timeout 30 (s)" May 8 23:52:28.654486 containerd[1443]: time="2025-05-08T23:52:28.653984408Z" level=info msg="Stop container \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" with signal terminated" May 8 23:52:28.678128 systemd[1]: cri-containerd-47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e.scope: Deactivated successfully. May 8 23:52:28.690193 containerd[1443]: time="2025-05-08T23:52:28.690146371Z" level=info msg="StopContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" with timeout 2 (s)" May 8 23:52:28.690618 containerd[1443]: time="2025-05-08T23:52:28.690595534Z" level=info msg="Stop container \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" with signal terminated" May 8 23:52:28.695155 containerd[1443]: time="2025-05-08T23:52:28.694593836Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:52:28.698181 systemd-networkd[1376]: lxc_health: Link DOWN May 8 23:52:28.698186 systemd-networkd[1376]: lxc_health: Lost carrier May 8 23:52:28.706299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e-rootfs.mount: Deactivated successfully. 
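
The "StopContainer ... with timeout N" entries that follow translate to SIGTERM, a bounded wait, then SIGKILL on expiry; the timeout (30 s for cilium-operator, 2 s for the agent) is the termination grace period kubelet passes through CRI. A sketch of that kill-then-wait pattern against a containerd task, meant to be called with an existing task handle rather than run standalone, and not the CRI plugin's actual code:

```go
package containerstop

import (
	"context"
	"syscall"
	"time"

	"github.com/containerd/containerd"
)

// stopTask sends SIGTERM, waits up to the grace period, then escalates to
// SIGKILL, mirroring the StopContainer sequence in the log above.
func stopTask(ctx context.Context, task containerd.Task, grace time.Duration) error {
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // exited within the grace period
		return nil
	case <-time.After(grace):
		// Grace period expired; force-kill and wait for the exit event.
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
		return nil
	}
}
```
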
May 8 23:52:28.711791 containerd[1443]: time="2025-05-08T23:52:28.711730533Z" level=info msg="shim disconnected" id=47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e namespace=k8s.io May 8 23:52:28.711791 containerd[1443]: time="2025-05-08T23:52:28.711786293Z" level=warning msg="cleaning up after shim disconnected" id=47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e namespace=k8s.io May 8 23:52:28.711791 containerd[1443]: time="2025-05-08T23:52:28.711795133Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:28.727686 systemd[1]: cri-containerd-c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7.scope: Deactivated successfully. May 8 23:52:28.728865 systemd[1]: cri-containerd-c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7.scope: Consumed 6.379s CPU time. May 8 23:52:28.745928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7-rootfs.mount: Deactivated successfully. May 8 23:52:28.791456 containerd[1443]: time="2025-05-08T23:52:28.791119059Z" level=info msg="StopContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" returns successfully" May 8 23:52:28.793122 containerd[1443]: time="2025-05-08T23:52:28.792922429Z" level=info msg="shim disconnected" id=c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7 namespace=k8s.io May 8 23:52:28.793122 containerd[1443]: time="2025-05-08T23:52:28.792970589Z" level=warning msg="cleaning up after shim disconnected" id=c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7 namespace=k8s.io May 8 23:52:28.793122 containerd[1443]: time="2025-05-08T23:52:28.792978429Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:28.794918 containerd[1443]: time="2025-05-08T23:52:28.794876000Z" level=info msg="StopPodSandbox for \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\"" May 8 23:52:28.800872 containerd[1443]: time="2025-05-08T23:52:28.800768153Z" level=info msg="Container to stop \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.803466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac-shm.mount: Deactivated successfully. 
May 8 23:52:28.807971 containerd[1443]: time="2025-05-08T23:52:28.807874153Z" level=info msg="StopContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" returns successfully" May 8 23:52:28.808390 containerd[1443]: time="2025-05-08T23:52:28.808361996Z" level=info msg="StopPodSandbox for \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\"" May 8 23:52:28.808690 containerd[1443]: time="2025-05-08T23:52:28.808522077Z" level=info msg="Container to stop \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.808690 containerd[1443]: time="2025-05-08T23:52:28.808553917Z" level=info msg="Container to stop \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.808690 containerd[1443]: time="2025-05-08T23:52:28.808563477Z" level=info msg="Container to stop \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.808690 containerd[1443]: time="2025-05-08T23:52:28.808572677Z" level=info msg="Container to stop \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.808690 containerd[1443]: time="2025-05-08T23:52:28.808580517Z" level=info msg="Container to stop \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 23:52:28.810624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a-shm.mount: Deactivated successfully. May 8 23:52:28.815668 systemd[1]: cri-containerd-a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a.scope: Deactivated successfully. May 8 23:52:28.817014 systemd[1]: cri-containerd-8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac.scope: Deactivated successfully. 
May 8 23:52:28.841407 containerd[1443]: time="2025-05-08T23:52:28.841350701Z" level=info msg="shim disconnected" id=a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a namespace=k8s.io May 8 23:52:28.841937 containerd[1443]: time="2025-05-08T23:52:28.841776503Z" level=warning msg="cleaning up after shim disconnected" id=a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a namespace=k8s.io May 8 23:52:28.841937 containerd[1443]: time="2025-05-08T23:52:28.841800344Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:28.842345 containerd[1443]: time="2025-05-08T23:52:28.841792904Z" level=info msg="shim disconnected" id=8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac namespace=k8s.io May 8 23:52:28.842552 containerd[1443]: time="2025-05-08T23:52:28.842430787Z" level=warning msg="cleaning up after shim disconnected" id=8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac namespace=k8s.io May 8 23:52:28.842552 containerd[1443]: time="2025-05-08T23:52:28.842444507Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:28.857629 containerd[1443]: time="2025-05-08T23:52:28.857582872Z" level=info msg="TearDown network for sandbox \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\" successfully" May 8 23:52:28.857931 containerd[1443]: time="2025-05-08T23:52:28.857819874Z" level=info msg="StopPodSandbox for \"8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac\" returns successfully" May 8 23:52:28.859394 containerd[1443]: time="2025-05-08T23:52:28.859132921Z" level=info msg="TearDown network for sandbox \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" successfully" May 8 23:52:28.859394 containerd[1443]: time="2025-05-08T23:52:28.859155241Z" level=info msg="StopPodSandbox for \"a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a\" returns successfully" May 8 23:52:28.991314 kubelet[2526]: I0508 23:52:28.991245 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-bpf-maps\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991314 kubelet[2526]: I0508 23:52:28.991301 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-lib-modules\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991314 kubelet[2526]: I0508 23:52:28.991317 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cni-path\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991337 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-run\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991358 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-config-path\") pod 
\"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991378 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hubble-tls\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991395 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-etc-cni-netd\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991408 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hostproc\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991729 kubelet[2526]: I0508 23:52:28.991421 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-cgroup\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991439 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80dac02b-8055-4c3e-adb8-1982c0bbba5c-clustermesh-secrets\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991455 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38f8ae8-25d8-4ac4-addf-64e8114f623f-cilium-config-path\") pod \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\" (UID: \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991470 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-net\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991485 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-kernel\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991515 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rkbs\" (UniqueName: \"kubernetes.io/projected/e38f8ae8-25d8-4ac4-addf-64e8114f623f-kube-api-access-7rkbs\") pod \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\" (UID: \"e38f8ae8-25d8-4ac4-addf-64e8114f623f\") " May 8 23:52:28.991889 kubelet[2526]: I0508 23:52:28.991532 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x52s6\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-kube-api-access-x52s6\") pod 
\"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.992010 kubelet[2526]: I0508 23:52:28.991548 2526 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-xtables-lock\") pod \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\" (UID: \"80dac02b-8055-4c3e-adb8-1982c0bbba5c\") " May 8 23:52:28.997177 kubelet[2526]: I0508 23:52:28.996623 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997177 kubelet[2526]: I0508 23:52:28.996663 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997177 kubelet[2526]: I0508 23:52:28.996622 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997177 kubelet[2526]: I0508 23:52:28.996623 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997177 kubelet[2526]: I0508 23:52:28.996689 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997544 kubelet[2526]: I0508 23:52:28.996711 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997544 kubelet[2526]: I0508 23:52:28.996747 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997544 kubelet[2526]: I0508 23:52:28.996887 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997544 kubelet[2526]: I0508 23:52:28.996944 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.997544 kubelet[2526]: I0508 23:52:28.996963 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 23:52:28.998867 kubelet[2526]: I0508 23:52:28.998601 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e38f8ae8-25d8-4ac4-addf-64e8114f623f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e38f8ae8-25d8-4ac4-addf-64e8114f623f" (UID: "e38f8ae8-25d8-4ac4-addf-64e8114f623f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 23:52:29.000666 kubelet[2526]: I0508 23:52:29.000629 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dac02b-8055-4c3e-adb8-1982c0bbba5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 23:52:29.000758 kubelet[2526]: I0508 23:52:29.000737 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:52:29.000881 kubelet[2526]: I0508 23:52:29.000767 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-kube-api-access-x52s6" (OuterVolumeSpecName: "kube-api-access-x52s6") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "kube-api-access-x52s6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:52:29.000961 kubelet[2526]: I0508 23:52:29.000862 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38f8ae8-25d8-4ac4-addf-64e8114f623f-kube-api-access-7rkbs" (OuterVolumeSpecName: "kube-api-access-7rkbs") pod "e38f8ae8-25d8-4ac4-addf-64e8114f623f" (UID: "e38f8ae8-25d8-4ac4-addf-64e8114f623f"). InnerVolumeSpecName "kube-api-access-7rkbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 23:52:29.002455 kubelet[2526]: I0508 23:52:29.002424 2526 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80dac02b-8055-4c3e-adb8-1982c0bbba5c" (UID: "80dac02b-8055-4c3e-adb8-1982c0bbba5c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 23:52:29.092308 kubelet[2526]: I0508 23:52:29.092242 2526 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092308 kubelet[2526]: I0508 23:52:29.092295 2526 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092327 2526 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092343 2526 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092391 2526 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092409 2526 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092423 2526 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092433 2526 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092439 2526 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092471 kubelet[2526]: I0508 23:52:29.092447 2526 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80dac02b-8055-4c3e-adb8-1982c0bbba5c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092454 2526 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092462 2526 reconciler_common.go:288] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092470 2526 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7rkbs\" (UniqueName: \"kubernetes.io/projected/e38f8ae8-25d8-4ac4-addf-64e8114f623f-kube-api-access-7rkbs\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092479 2526 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e38f8ae8-25d8-4ac4-addf-64e8114f623f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092488 2526 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x52s6\" (UniqueName: \"kubernetes.io/projected/80dac02b-8055-4c3e-adb8-1982c0bbba5c-kube-api-access-x52s6\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.092642 kubelet[2526]: I0508 23:52:29.092496 2526 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80dac02b-8055-4c3e-adb8-1982c0bbba5c-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 23:52:29.252131 systemd[1]: Removed slice kubepods-besteffort-pode38f8ae8_25d8_4ac4_addf_64e8114f623f.slice - libcontainer container kubepods-besteffort-pode38f8ae8_25d8_4ac4_addf_64e8114f623f.slice. May 8 23:52:29.253200 systemd[1]: Removed slice kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice - libcontainer container kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice. May 8 23:52:29.253294 systemd[1]: kubepods-burstable-pod80dac02b_8055_4c3e_adb8_1982c0bbba5c.slice: Consumed 6.510s CPU time. 
May 8 23:52:29.435954 kubelet[2526]: I0508 23:52:29.435889 2526 scope.go:117] "RemoveContainer" containerID="47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e" May 8 23:52:29.438153 containerd[1443]: time="2025-05-08T23:52:29.438072191Z" level=info msg="RemoveContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\"" May 8 23:52:29.442672 containerd[1443]: time="2025-05-08T23:52:29.442354374Z" level=info msg="RemoveContainer for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" returns successfully" May 8 23:52:29.443663 kubelet[2526]: I0508 23:52:29.443562 2526 scope.go:117] "RemoveContainer" containerID="47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e" May 8 23:52:29.443860 containerd[1443]: time="2025-05-08T23:52:29.443787662Z" level=error msg="ContainerStatus for \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\": not found" May 8 23:52:29.447133 kubelet[2526]: E0508 23:52:29.447095 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\": not found" containerID="47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e" May 8 23:52:29.447210 kubelet[2526]: I0508 23:52:29.447136 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e"} err="failed to get container status \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\": rpc error: code = NotFound desc = an error occurred when try to find container \"47a1a6d89ffe23143d05fa6cbfef84301c875d003e90515f75b69c477809794e\": not found" May 8 23:52:29.447240 kubelet[2526]: I0508 23:52:29.447215 2526 scope.go:117] "RemoveContainer" containerID="c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7" May 8 23:52:29.449635 containerd[1443]: time="2025-05-08T23:52:29.449432413Z" level=info msg="RemoveContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\"" May 8 23:52:29.453183 containerd[1443]: time="2025-05-08T23:52:29.453060953Z" level=info msg="RemoveContainer for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" returns successfully" May 8 23:52:29.453423 kubelet[2526]: I0508 23:52:29.453401 2526 scope.go:117] "RemoveContainer" containerID="8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7" May 8 23:52:29.454506 containerd[1443]: time="2025-05-08T23:52:29.454481121Z" level=info msg="RemoveContainer for \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\"" May 8 23:52:29.456555 containerd[1443]: time="2025-05-08T23:52:29.456518692Z" level=info msg="RemoveContainer for \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\" returns successfully" May 8 23:52:29.456697 kubelet[2526]: I0508 23:52:29.456679 2526 scope.go:117] "RemoveContainer" containerID="d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db" May 8 23:52:29.457774 containerd[1443]: time="2025-05-08T23:52:29.457564578Z" level=info msg="RemoveContainer for \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\"" May 8 23:52:29.459876 containerd[1443]: time="2025-05-08T23:52:29.459758310Z" level=info 
msg="RemoveContainer for \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\" returns successfully" May 8 23:52:29.460008 kubelet[2526]: I0508 23:52:29.459941 2526 scope.go:117] "RemoveContainer" containerID="8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2" May 8 23:52:29.464034 containerd[1443]: time="2025-05-08T23:52:29.464002493Z" level=info msg="RemoveContainer for \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\"" May 8 23:52:29.466088 containerd[1443]: time="2025-05-08T23:52:29.466054144Z" level=info msg="RemoveContainer for \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\" returns successfully" May 8 23:52:29.466350 kubelet[2526]: I0508 23:52:29.466310 2526 scope.go:117] "RemoveContainer" containerID="a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174" May 8 23:52:29.467554 containerd[1443]: time="2025-05-08T23:52:29.467524072Z" level=info msg="RemoveContainer for \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\"" May 8 23:52:29.469480 containerd[1443]: time="2025-05-08T23:52:29.469444803Z" level=info msg="RemoveContainer for \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\" returns successfully" May 8 23:52:29.469630 kubelet[2526]: I0508 23:52:29.469602 2526 scope.go:117] "RemoveContainer" containerID="c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7" May 8 23:52:29.469815 containerd[1443]: time="2025-05-08T23:52:29.469779325Z" level=error msg="ContainerStatus for \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\": not found" May 8 23:52:29.469943 kubelet[2526]: E0508 23:52:29.469915 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\": not found" containerID="c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7" May 8 23:52:29.469977 kubelet[2526]: I0508 23:52:29.469947 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7"} err="failed to get container status \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c06e6aeb8f1b472b6a8d4b3953489712553ca8b73f94a58273ea41ed9e5d79a7\": not found" May 8 23:52:29.469977 kubelet[2526]: I0508 23:52:29.469971 2526 scope.go:117] "RemoveContainer" containerID="8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7" May 8 23:52:29.470129 containerd[1443]: time="2025-05-08T23:52:29.470103766Z" level=error msg="ContainerStatus for \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\": not found" May 8 23:52:29.470343 kubelet[2526]: E0508 23:52:29.470217 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\": not found" containerID="8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7" May 8 
23:52:29.470343 kubelet[2526]: I0508 23:52:29.470242 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7"} err="failed to get container status \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e5ae42a2e2ce3d0cce5d045c0d547e82e85fcf2f7edc7e38d2022b031122aa7\": not found" May 8 23:52:29.470343 kubelet[2526]: I0508 23:52:29.470259 2526 scope.go:117] "RemoveContainer" containerID="d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db" May 8 23:52:29.470438 containerd[1443]: time="2025-05-08T23:52:29.470406168Z" level=error msg="ContainerStatus for \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\": not found" May 8 23:52:29.470530 kubelet[2526]: E0508 23:52:29.470511 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\": not found" containerID="d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db" May 8 23:52:29.470565 kubelet[2526]: I0508 23:52:29.470534 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db"} err="failed to get container status \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\": rpc error: code = NotFound desc = an error occurred when try to find container \"d77df76dac7d94ac2296cde16f6396b7f26d5ef1bd456598289be220d50e80db\": not found" May 8 23:52:29.470565 kubelet[2526]: I0508 23:52:29.470548 2526 scope.go:117] "RemoveContainer" containerID="8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2" May 8 23:52:29.470686 containerd[1443]: time="2025-05-08T23:52:29.470661929Z" level=error msg="ContainerStatus for \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\": not found" May 8 23:52:29.470781 kubelet[2526]: E0508 23:52:29.470756 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\": not found" containerID="8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2" May 8 23:52:29.470813 kubelet[2526]: I0508 23:52:29.470786 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2"} err="failed to get container status \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d4c5c18a389a907da7bbee6c4058299e06e9905b0c2827d2d196c116700cde2\": not found" May 8 23:52:29.470813 kubelet[2526]: I0508 23:52:29.470800 2526 scope.go:117] "RemoveContainer" containerID="a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174" May 8 23:52:29.471005 containerd[1443]: time="2025-05-08T23:52:29.470963011Z" level=error 
msg="ContainerStatus for \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\": not found" May 8 23:52:29.471105 kubelet[2526]: E0508 23:52:29.471086 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\": not found" containerID="a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174" May 8 23:52:29.471129 kubelet[2526]: I0508 23:52:29.471109 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174"} err="failed to get container status \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\": rpc error: code = NotFound desc = an error occurred when try to find container \"a183cff96233bee45a5884bd8274706fb7742a9fcb84f87b0d95aac1c7cac174\": not found" May 8 23:52:29.660245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8484b08454b5d8f670447b3c401c98fba08d45019eaf813e651de703df06b7ac-rootfs.mount: Deactivated successfully. May 8 23:52:29.660348 systemd[1]: var-lib-kubelet-pods-e38f8ae8\x2d25d8\x2d4ac4\x2daddf\x2d64e8114f623f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7rkbs.mount: Deactivated successfully. May 8 23:52:29.660408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4fd0ef65f04e0dce5cb3f31117357ebbe7c41644bdae52d114bc2d47fb26b5a-rootfs.mount: Deactivated successfully. May 8 23:52:29.660465 systemd[1]: var-lib-kubelet-pods-80dac02b\x2d8055\x2d4c3e\x2dadb8\x2d1982c0bbba5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx52s6.mount: Deactivated successfully. May 8 23:52:29.660514 systemd[1]: var-lib-kubelet-pods-80dac02b\x2d8055\x2d4c3e\x2dadb8\x2d1982c0bbba5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 23:52:29.660559 systemd[1]: var-lib-kubelet-pods-80dac02b\x2d8055\x2d4c3e\x2dadb8\x2d1982c0bbba5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 23:52:30.246798 kubelet[2526]: E0508 23:52:30.246759 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:30.297422 kubelet[2526]: E0508 23:52:30.297377 2526 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:52:30.595605 sshd[4172]: Connection closed by 10.0.0.1 port 58512 May 8 23:52:30.595962 sshd-session[4170]: pam_unix(sshd:session): session closed for user core May 8 23:52:30.608424 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:58512.service: Deactivated successfully. May 8 23:52:30.611007 systemd[1]: session-23.scope: Deactivated successfully. May 8 23:52:30.611163 systemd[1]: session-23.scope: Consumed 1.938s CPU time. May 8 23:52:30.612362 systemd-logind[1428]: Session 23 logged out. Waiting for processes to exit. May 8 23:52:30.613654 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514). May 8 23:52:30.614410 systemd-logind[1428]: Removed session 23. 
May 8 23:52:30.661384 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:30.662506 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:30.666519 systemd-logind[1428]: New session 24 of user core. May 8 23:52:30.683988 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 23:52:31.246655 kubelet[2526]: I0508 23:52:31.245779 2526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" path="/var/lib/kubelet/pods/80dac02b-8055-4c3e-adb8-1982c0bbba5c/volumes" May 8 23:52:31.246655 kubelet[2526]: I0508 23:52:31.246350 2526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38f8ae8-25d8-4ac4-addf-64e8114f623f" path="/var/lib/kubelet/pods/e38f8ae8-25d8-4ac4-addf-64e8114f623f/volumes" May 8 23:52:31.396995 sshd[4335]: Connection closed by 10.0.0.1 port 58514 May 8 23:52:31.397494 sshd-session[4333]: pam_unix(sshd:session): session closed for user core May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409911 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="clean-cilium-state" May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409942 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="cilium-agent" May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409949 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="mount-cgroup" May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409958 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="apply-sysctl-overwrites" May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409964 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="mount-bpf-fs" May 8 23:52:31.412490 kubelet[2526]: E0508 23:52:31.409971 2526 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e38f8ae8-25d8-4ac4-addf-64e8114f623f" containerName="cilium-operator" May 8 23:52:31.412490 kubelet[2526]: I0508 23:52:31.409996 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dac02b-8055-4c3e-adb8-1982c0bbba5c" containerName="cilium-agent" May 8 23:52:31.412490 kubelet[2526]: I0508 23:52:31.410002 2526 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38f8ae8-25d8-4ac4-addf-64e8114f623f" containerName="cilium-operator" May 8 23:52:31.412735 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:58514.service: Deactivated successfully. May 8 23:52:31.415538 systemd[1]: session-24.scope: Deactivated successfully. May 8 23:52:31.417988 systemd-logind[1428]: Session 24 logged out. Waiting for processes to exit. May 8 23:52:31.433200 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:58520.service - OpenSSH per-connection server daemon (10.0.0.1:58520). May 8 23:52:31.435596 systemd-logind[1428]: Removed session 24. May 8 23:52:31.440542 systemd[1]: Created slice kubepods-burstable-podf181feb2_4df1_41f4_874f_8c960d8c2c66.slice - libcontainer container kubepods-burstable-podf181feb2_4df1_41f4_874f_8c960d8c2c66.slice. 
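The "Cleaned up orphaned pod volumes dir" entries above point at kubelet's on-disk layout, the same one the escaped .mount units decode to: each volume lives under /var/lib/kubelet/pods/<podUID>/volumes/<plugin name with "/" replaced by "~">/<volume name>. A hypothetical path builder, assuming the default kubelet root-dir:

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // podVolumeDir assembles the location kubelet uses for a pod volume:
    // <root>/pods/<podUID>/volumes/<plugin with / -> ~>/<volumeName>.
    func podVolumeDir(kubeletRoot, podUID, plugin, volumeName string) string {
    	return filepath.Join(kubeletRoot, "pods", podUID, "volumes",
    		strings.ReplaceAll(plugin, "/", "~"), volumeName)
    }

    func main() {
    	// The hubble-tls projected volume torn down earlier in this log.
    	fmt.Println(podVolumeDir("/var/lib/kubelet",
    		"80dac02b-8055-4c3e-adb8-1982c0bbba5c",
    		"kubernetes.io/projected", "hubble-tls"))
    }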
May 8 23:52:31.478062 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 58520 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:31.479444 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:31.483582 systemd-logind[1428]: New session 25 of user core. May 8 23:52:31.492026 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 23:52:31.507526 kubelet[2526]: I0508 23:52:31.507428 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-hostproc\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507526 kubelet[2526]: I0508 23:52:31.507471 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-etc-cni-netd\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507526 kubelet[2526]: I0508 23:52:31.507491 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f181feb2-4df1-41f4-874f-8c960d8c2c66-cilium-ipsec-secrets\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507526 kubelet[2526]: I0508 23:52:31.507508 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvf8\" (UniqueName: \"kubernetes.io/projected/f181feb2-4df1-41f4-874f-8c960d8c2c66-kube-api-access-rjvf8\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507526 kubelet[2526]: I0508 23:52:31.507524 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-host-proc-sys-kernel\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507539 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-cilium-cgroup\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507554 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-bpf-maps\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507569 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-cni-path\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507582 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-xtables-lock\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507595 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f181feb2-4df1-41f4-874f-8c960d8c2c66-hubble-tls\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507696 kubelet[2526]: I0508 23:52:31.507611 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-lib-modules\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507814 kubelet[2526]: I0508 23:52:31.507638 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f181feb2-4df1-41f4-874f-8c960d8c2c66-clustermesh-secrets\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507814 kubelet[2526]: I0508 23:52:31.507653 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-host-proc-sys-net\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507814 kubelet[2526]: I0508 23:52:31.507669 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f181feb2-4df1-41f4-874f-8c960d8c2c66-cilium-run\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.507814 kubelet[2526]: I0508 23:52:31.507683 2526 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f181feb2-4df1-41f4-874f-8c960d8c2c66-cilium-config-path\") pod \"cilium-5crdn\" (UID: \"f181feb2-4df1-41f4-874f-8c960d8c2c66\") " pod="kube-system/cilium-5crdn" May 8 23:52:31.541655 sshd[4350]: Connection closed by 10.0.0.1 port 58520 May 8 23:52:31.542167 sshd-session[4347]: pam_unix(sshd:session): session closed for user core May 8 23:52:31.553492 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:58520.service: Deactivated successfully. May 8 23:52:31.555465 systemd[1]: session-25.scope: Deactivated successfully. May 8 23:52:31.556791 systemd-logind[1428]: Session 25 logged out. Waiting for processes to exit. May 8 23:52:31.564128 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). May 8 23:52:31.564999 systemd-logind[1428]: Removed session 25. May 8 23:52:31.606250 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:Hm8V4TmgatMAzJ6FDPqyKbV5zO4XTeu7LfiSckNl/4Y May 8 23:52:31.607517 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:52:31.622896 systemd-logind[1428]: New session 26 of user core. 
May 8 23:52:31.633025 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 23:52:31.744764 kubelet[2526]: E0508 23:52:31.744733 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:31.745313 containerd[1443]: time="2025-05-08T23:52:31.745250072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5crdn,Uid:f181feb2-4df1-41f4-874f-8c960d8c2c66,Namespace:kube-system,Attempt:0,}" May 8 23:52:31.772358 containerd[1443]: time="2025-05-08T23:52:31.771784130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:52:31.773148 containerd[1443]: time="2025-05-08T23:52:31.772756655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:52:31.773148 containerd[1443]: time="2025-05-08T23:52:31.772782255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:31.773148 containerd[1443]: time="2025-05-08T23:52:31.772914576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:52:31.793058 systemd[1]: Started cri-containerd-6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc.scope - libcontainer container 6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc. May 8 23:52:31.820235 containerd[1443]: time="2025-05-08T23:52:31.820176981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5crdn,Uid:f181feb2-4df1-41f4-874f-8c960d8c2c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\"" May 8 23:52:31.821110 kubelet[2526]: E0508 23:52:31.821082 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:31.824502 containerd[1443]: time="2025-05-08T23:52:31.824426923Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:52:31.836454 containerd[1443]: time="2025-05-08T23:52:31.836248505Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1\"" May 8 23:52:31.836923 containerd[1443]: time="2025-05-08T23:52:31.836805668Z" level=info msg="StartContainer for \"4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1\"" May 8 23:52:31.863010 systemd[1]: Started cri-containerd-4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1.scope - libcontainer container 4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1. May 8 23:52:31.883747 containerd[1443]: time="2025-05-08T23:52:31.883700471Z" level=info msg="StartContainer for \"4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1\" returns successfully" May 8 23:52:31.916193 systemd[1]: cri-containerd-4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1.scope: Deactivated successfully. 
May 8 23:52:31.950523 containerd[1443]: time="2025-05-08T23:52:31.950434458Z" level=info msg="shim disconnected" id=4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1 namespace=k8s.io May 8 23:52:31.950523 containerd[1443]: time="2025-05-08T23:52:31.950492338Z" level=warning msg="cleaning up after shim disconnected" id=4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1 namespace=k8s.io May 8 23:52:31.950523 containerd[1443]: time="2025-05-08T23:52:31.950502058Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:32.448270 kubelet[2526]: E0508 23:52:32.445968 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:32.450032 containerd[1443]: time="2025-05-08T23:52:32.449991554Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:52:32.481007 containerd[1443]: time="2025-05-08T23:52:32.480955671Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34\"" May 8 23:52:32.481595 containerd[1443]: time="2025-05-08T23:52:32.481541234Z" level=info msg="StartContainer for \"cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34\"" May 8 23:52:32.511001 systemd[1]: Started cri-containerd-cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34.scope - libcontainer container cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34. May 8 23:52:32.532127 containerd[1443]: time="2025-05-08T23:52:32.532027329Z" level=info msg="StartContainer for \"cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34\" returns successfully" May 8 23:52:32.541272 systemd[1]: cri-containerd-cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34.scope: Deactivated successfully. 
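Each "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triplet above marks a short-lived init container exiting. containerd emits these in logfmt-style key=value fields, which makes a capture like this straightforward to post-process; a small sketch (the regex only covers the quoted and unquoted fields used in these lines, and keeps the surrounding quotes on quoted values):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // fieldRe matches key=value and key="quoted value" pairs in
    // containerd's logfmt-style output.
    var fieldRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

    func main() {
    	line := `time="2025-05-08T23:52:31.950434458Z" level=info msg="shim disconnected" id=4071c2c54e749a976b0f614e0bc4a4a552fc47650316a4ace231ddccf688b2d1 namespace=k8s.io`
    	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
    		fmt.Printf("%s -> %s\n", m[1], m[2])
    	}
    }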
May 8 23:52:32.571732 containerd[1443]: time="2025-05-08T23:52:32.571471089Z" level=info msg="shim disconnected" id=cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34 namespace=k8s.io May 8 23:52:32.571732 containerd[1443]: time="2025-05-08T23:52:32.571526729Z" level=warning msg="cleaning up after shim disconnected" id=cd4c6a9365742ae95f43678830985572d58febe5d407afbfdffb4032825aac34 namespace=k8s.io May 8 23:52:32.571732 containerd[1443]: time="2025-05-08T23:52:32.571534369Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:33.449513 kubelet[2526]: E0508 23:52:33.449452 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:33.452980 containerd[1443]: time="2025-05-08T23:52:33.452817973Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:52:33.467437 containerd[1443]: time="2025-05-08T23:52:33.467391404Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63\"" May 8 23:52:33.468316 containerd[1443]: time="2025-05-08T23:52:33.468063208Z" level=info msg="StartContainer for \"b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63\"" May 8 23:52:33.496044 systemd[1]: Started cri-containerd-b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63.scope - libcontainer container b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63. May 8 23:52:33.521795 systemd[1]: cri-containerd-b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63.scope: Deactivated successfully. May 8 23:52:33.524943 containerd[1443]: time="2025-05-08T23:52:33.524898208Z" level=info msg="StartContainer for \"b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63\" returns successfully" May 8 23:52:33.541129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63-rootfs.mount: Deactivated successfully. May 8 23:52:33.554483 containerd[1443]: time="2025-05-08T23:52:33.554426274Z" level=info msg="shim disconnected" id=b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63 namespace=k8s.io May 8 23:52:33.554910 containerd[1443]: time="2025-05-08T23:52:33.554716555Z" level=warning msg="cleaning up after shim disconnected" id=b3a92d42e2a12495573ca5681f870e41d4bbde4b8a84bd81bca6e06ecd1b0a63 namespace=k8s.io May 8 23:52:33.554910 containerd[1443]: time="2025-05-08T23:52:33.554750995Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:34.453229 kubelet[2526]: E0508 23:52:34.452872 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:34.457183 containerd[1443]: time="2025-05-08T23:52:34.455947265Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:52:34.469498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755911014.mount: Deactivated successfully. 
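Read together, the CreateContainer entries for sandbox 6007e4983b7e... run Cilium's init steps in a fixed order before the long-running agent starts; each step exits immediately, which is why every StartContainer is followed by a scope deactivation and shim cleanup. The sequence, as recoverable from this log alone:

    package main

    import "fmt"

    func main() {
    	// Containers observed for sandbox 6007e4983b7e..., in start order.
    	steps := []string{
    		"mount-cgroup",            // started 23:52:31, exits at once
    		"apply-sysctl-overwrites", // 23:52:32
    		"mount-bpf-fs",            // 23:52:33
    		"clean-cilium-state",      // 23:52:34
    		"cilium-agent",            // 23:52:35, stays running
    	}
    	for i, s := range steps {
    		fmt.Printf("%d. %s\n", i+1, s)
    	}
    }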
May 8 23:52:34.471606 containerd[1443]: time="2025-05-08T23:52:34.471565820Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99\"" May 8 23:52:34.472474 containerd[1443]: time="2025-05-08T23:52:34.472446584Z" level=info msg="StartContainer for \"9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99\"" May 8 23:52:34.505026 systemd[1]: Started cri-containerd-9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99.scope - libcontainer container 9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99. May 8 23:52:34.524980 systemd[1]: cri-containerd-9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99.scope: Deactivated successfully. May 8 23:52:34.525656 containerd[1443]: time="2025-05-08T23:52:34.525623080Z" level=info msg="StartContainer for \"9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99\" returns successfully" May 8 23:52:34.543434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99-rootfs.mount: Deactivated successfully. May 8 23:52:34.547365 containerd[1443]: time="2025-05-08T23:52:34.547166624Z" level=info msg="shim disconnected" id=9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99 namespace=k8s.io May 8 23:52:34.547365 containerd[1443]: time="2025-05-08T23:52:34.547220904Z" level=warning msg="cleaning up after shim disconnected" id=9120ece69fa1a0254965352c7bc45f9de4adc46244d799de5c66683abf18ad99 namespace=k8s.io May 8 23:52:34.547365 containerd[1443]: time="2025-05-08T23:52:34.547228984Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:52:35.298686 kubelet[2526]: E0508 23:52:35.298641 2526 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:52:35.457557 kubelet[2526]: E0508 23:52:35.457389 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:35.462442 containerd[1443]: time="2025-05-08T23:52:35.462399369Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:52:35.476491 containerd[1443]: time="2025-05-08T23:52:35.476430955Z" level=info msg="CreateContainer within sandbox \"6007e4983b7e043bd3d075afe4a29cb4b3cd6fc3706215900f332115635dc0cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b701f9cd93d7ce3aba14035dc1909c17f419e000fe960b61765cde27968e783a\"" May 8 23:52:35.477121 containerd[1443]: time="2025-05-08T23:52:35.477082958Z" level=info msg="StartContainer for \"b701f9cd93d7ce3aba14035dc1909c17f419e000fe960b61765cde27968e783a\"" May 8 23:52:35.511065 systemd[1]: Started cri-containerd-b701f9cd93d7ce3aba14035dc1909c17f419e000fe960b61765cde27968e783a.scope - libcontainer container b701f9cd93d7ce3aba14035dc1909c17f419e000fe960b61765cde27968e783a. 
May 8 23:52:35.535827 containerd[1443]: time="2025-05-08T23:52:35.535688833Z" level=info msg="StartContainer for \"b701f9cd93d7ce3aba14035dc1909c17f419e000fe960b61765cde27968e783a\" returns successfully" May 8 23:52:35.807868 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 8 23:52:36.462495 kubelet[2526]: E0508 23:52:36.462345 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:36.479406 kubelet[2526]: I0508 23:52:36.479340 2526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5crdn" podStartSLOduration=5.479322401 podStartE2EDuration="5.479322401s" podCreationTimestamp="2025-05-08 23:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:52:36.47911136 +0000 UTC m=+81.310436471" watchObservedRunningTime="2025-05-08 23:52:36.479322401 +0000 UTC m=+81.310647512" May 8 23:52:36.980151 kubelet[2526]: I0508 23:52:36.980077 2526 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T23:52:36Z","lastTransitionTime":"2025-05-08T23:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 23:52:37.746152 kubelet[2526]: E0508 23:52:37.746113 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:38.079608 kubelet[2526]: E0508 23:52:38.079497 2526 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:41302->127.0.0.1:45041: read tcp 127.0.0.1:41302->127.0.0.1:45041: read: connection reset by peer May 8 23:52:38.727111 systemd-networkd[1376]: lxc_health: Link UP May 8 23:52:38.734505 systemd-networkd[1376]: lxc_health: Gained carrier May 8 23:52:39.747248 kubelet[2526]: E0508 23:52:39.746555 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:39.823093 systemd-networkd[1376]: lxc_health: Gained IPv6LL May 8 23:52:40.468967 kubelet[2526]: E0508 23:52:40.468909 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:41.470011 kubelet[2526]: E0508 23:52:41.469970 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:44.243278 kubelet[2526]: E0508 23:52:44.243249 2526 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 23:52:44.459501 sshd[4362]: Connection closed by 10.0.0.1 port 58524 May 8 23:52:44.459425 sshd-session[4356]: pam_unix(sshd:session): session closed for user core May 8 23:52:44.463089 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:58524.service: Deactivated successfully. May 8 23:52:44.464734 systemd[1]: session-26.scope: Deactivated successfully. 
May 8 23:52:44.467117 systemd-logind[1428]: Session 26 logged out. Waiting for processes to exit. May 8 23:52:44.468481 systemd-logind[1428]: Removed session 26.
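The recurring "Nameserver limits exceeded" errors throughout this log are kubelet clamping the host's resolv.conf: it forwards at most three nameservers into pod DNS config, drops the rest, and logs the line it applied (here 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive, so the host must list at least one more). A sketch of that clamping, assuming the well-known limit of three (kubelet's actual implementation differs in detail, and the omitted fourth server below is a made-up placeholder, not taken from this log):

    package main

    import "fmt"

    // maxNameservers is the limit kubelet enforces when building a pod's
    // resolv.conf (assumed here; glibc historically honours only three).
    const maxNameservers = 3

    // clampNameservers keeps the first max entries and reports whether
    // anything was dropped, mirroring the "Nameserver limits exceeded" log.
    func clampNameservers(ns []string, max int) ([]string, bool) {
    	if len(ns) <= max {
    		return ns, false
    	}
    	return ns[:max], true
    }

    func main() {
    	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 9.9.9.9: hypothetical extra entry
    	applied, truncated := clampNameservers(host, maxNameservers)
    	fmt.Println(applied, "truncated:", truncated)
    }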